00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3989 00:00:00.001 originally caused by: 00:00:00.001 Started by user Berger, Michal 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.055 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.082 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.696 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.716 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.730 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:04.730 > git config core.sparsecheckout # timeout=10 00:00:04.744 > git read-tree -mu HEAD # timeout=10 00:00:04.766 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:04.788 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:04.788 > git rev-list 
--no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:04.957 [Pipeline] Start of Pipeline 00:00:04.971 [Pipeline] library 00:00:04.972 Loading library shm_lib@master 00:00:04.972 Library shm_lib@master is cached. Copying from home. 00:00:04.984 [Pipeline] node 00:00:20.016 Still waiting to schedule task 00:00:20.017 Waiting for next available executor on ‘vagrant-vm-host’ 00:17:04.157 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest 00:17:04.160 [Pipeline] { 00:17:04.172 [Pipeline] catchError 00:17:04.173 [Pipeline] { 00:17:04.187 [Pipeline] wrap 00:17:04.196 [Pipeline] { 00:17:04.205 [Pipeline] stage 00:17:04.233 [Pipeline] { (Prologue) 00:17:04.252 [Pipeline] echo 00:17:04.254 Node: VM-host-SM0 00:17:04.260 [Pipeline] cleanWs 00:17:04.269 [WS-CLEANUP] Deleting project workspace... 00:17:04.269 [WS-CLEANUP] Deferred wipeout is used... 00:17:04.276 [WS-CLEANUP] done 00:17:04.460 [Pipeline] setCustomBuildProperty 00:17:04.547 [Pipeline] httpRequest 00:17:05.093 [Pipeline] echo 00:17:05.095 Sorcerer 10.211.164.101 is alive 00:17:05.106 [Pipeline] retry 00:17:05.108 [Pipeline] { 00:17:05.123 [Pipeline] httpRequest 00:17:05.127 HttpMethod: GET 00:17:05.128 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:17:05.128 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:17:05.129 Response Code: HTTP/1.1 200 OK 00:17:05.130 Success: Status code 200 is in the accepted range: 200,404 00:17:05.131 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:17:05.276 [Pipeline] } 00:17:05.295 [Pipeline] // retry 00:17:05.303 [Pipeline] sh 00:17:05.586 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:17:05.728 [Pipeline] httpRequest 00:17:06.097 [Pipeline] echo 00:17:06.099 Sorcerer 10.211.164.101 is alive 00:17:06.108 [Pipeline] retry 00:17:06.110 [Pipeline] 
{ 00:17:06.124 [Pipeline] httpRequest 00:17:06.128 HttpMethod: GET 00:17:06.129 URL: http://10.211.164.101/packages/spdk_83ba9086796471697a4975a58f60e2392bccd08c.tar.gz 00:17:06.129 Sending request to url: http://10.211.164.101/packages/spdk_83ba9086796471697a4975a58f60e2392bccd08c.tar.gz 00:17:06.132 Response Code: HTTP/1.1 200 OK 00:17:06.132 Success: Status code 200 is in the accepted range: 200,404 00:17:06.133 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_83ba9086796471697a4975a58f60e2392bccd08c.tar.gz 00:17:11.579 [Pipeline] } 00:17:11.597 [Pipeline] // retry 00:17:11.608 [Pipeline] sh 00:17:11.887 + tar --no-same-owner -xf spdk_83ba9086796471697a4975a58f60e2392bccd08c.tar.gz 00:17:15.230 [Pipeline] sh 00:17:15.505 + git -C spdk log --oneline -n5 00:17:15.506 83ba90867 fio/bdev: fix typo in README 00:17:15.506 45379ed84 module/compress: Cleanup vol data, when claim fails 00:17:15.506 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:17:15.506 1cbacb58f test/nvmf: Clarify comment about lack of support for iWARP in tests 00:17:15.506 169c3cd04 thread: set SPDK_CONFIG_MAX_NUMA_NODES to 1 if not defined 00:17:15.525 [Pipeline] withCredentials 00:17:15.535 > git --version # timeout=10 00:17:15.548 > git --version # 'git version 2.39.2' 00:17:15.560 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:17:15.562 [Pipeline] { 00:17:15.571 [Pipeline] retry 00:17:15.574 [Pipeline] { 00:17:15.589 [Pipeline] sh 00:17:15.863 + git ls-remote http://dpdk.org/git/dpdk main 00:17:15.873 [Pipeline] } 00:17:15.890 [Pipeline] // retry 00:17:15.896 [Pipeline] } 00:17:15.911 [Pipeline] // withCredentials 00:17:15.921 [Pipeline] httpRequest 00:17:16.273 [Pipeline] echo 00:17:16.275 Sorcerer 10.211.164.101 is alive 00:17:16.284 [Pipeline] retry 00:17:16.286 [Pipeline] { 00:17:16.293 [Pipeline] httpRequest 00:17:16.296 HttpMethod: GET 00:17:16.297 URL: http://10.211.164.101/packages/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 
00:17:16.297 Sending request to url: http://10.211.164.101/packages/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:17:16.298 Response Code: HTTP/1.1 200 OK 00:17:16.299 Success: Status code 200 is in the accepted range: 200,404 00:17:16.299 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:17:17.376 [Pipeline] } 00:17:17.389 [Pipeline] // retry 00:17:17.396 [Pipeline] sh 00:17:17.674 + tar --no-same-owner -xf dpdk_6dad0bb5c8621644beca86ff5f4910a943ba604d.tar.gz 00:17:19.585 [Pipeline] sh 00:17:19.863 + git -C dpdk log --oneline -n5 00:17:19.863 6dad0bb5c8 event/cnxk: fix getwork write data on reconfig 00:17:19.863 b74f298f9b test/event: fix device stop 00:17:19.863 34e3ad3a1e eventdev: remove single event enqueue and dequeue 00:17:19.863 5079ede71e event/skeleton: remove single event enqueue and dequeue 00:17:19.863 a83fc0f4e1 event/cnxk: remove single event enqueue and dequeue 00:17:19.882 [Pipeline] writeFile 00:17:19.898 [Pipeline] sh 00:17:20.178 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:17:20.192 [Pipeline] sh 00:17:20.475 + cat autorun-spdk.conf 00:17:20.475 SPDK_RUN_FUNCTIONAL_TEST=1 00:17:20.475 SPDK_RUN_ASAN=1 00:17:20.475 SPDK_RUN_UBSAN=1 00:17:20.475 SPDK_TEST_RAID=1 00:17:20.475 SPDK_TEST_NATIVE_DPDK=main 00:17:20.475 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:17:20.475 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:20.482 RUN_NIGHTLY=1 00:17:20.484 [Pipeline] } 00:17:20.500 [Pipeline] // stage 00:17:20.517 [Pipeline] stage 00:17:20.519 [Pipeline] { (Run VM) 00:17:20.532 [Pipeline] sh 00:17:20.812 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:17:20.812 + echo 'Start stage prepare_nvme.sh' 00:17:20.812 Start stage prepare_nvme.sh 00:17:20.812 + [[ -n 2 ]] 00:17:20.812 + disk_prefix=ex2 00:17:20.812 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:17:20.812 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 
]] 00:17:20.812 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:17:20.812 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:20.812 ++ SPDK_RUN_ASAN=1 00:17:20.812 ++ SPDK_RUN_UBSAN=1 00:17:20.812 ++ SPDK_TEST_RAID=1 00:17:20.812 ++ SPDK_TEST_NATIVE_DPDK=main 00:17:20.812 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:17:20.813 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:20.813 ++ RUN_NIGHTLY=1 00:17:20.813 + cd /var/jenkins/workspace/raid-vg-autotest 00:17:20.813 + nvme_files=() 00:17:20.813 + declare -A nvme_files 00:17:20.813 + backend_dir=/var/lib/libvirt/images/backends 00:17:20.813 + nvme_files['nvme.img']=5G 00:17:20.813 + nvme_files['nvme-cmb.img']=5G 00:17:20.813 + nvme_files['nvme-multi0.img']=4G 00:17:20.813 + nvme_files['nvme-multi1.img']=4G 00:17:20.813 + nvme_files['nvme-multi2.img']=4G 00:17:20.813 + nvme_files['nvme-openstack.img']=8G 00:17:20.813 + nvme_files['nvme-zns.img']=5G 00:17:20.813 + (( SPDK_TEST_NVME_PMR == 1 )) 00:17:20.813 + (( SPDK_TEST_FTL == 1 )) 00:17:20.813 + (( SPDK_TEST_NVME_FDP == 1 )) 00:17:20.813 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:17:20.813 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:17:20.813 + for nvme in "${!nvme_files[@]}" 00:17:20.813 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:17:21.379 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:17:21.379 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:17:21.379 + echo 'End stage prepare_nvme.sh' 00:17:21.379 End stage prepare_nvme.sh 00:17:21.391 [Pipeline] sh 00:17:21.675 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:17:21.675 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:17:21.675 00:17:21.675 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:17:21.675 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:17:21.675 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:17:21.675 HELP=0 00:17:21.675 DRY_RUN=0 00:17:21.675 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:17:21.675 NVME_DISKS_TYPE=nvme,nvme, 00:17:21.675 NVME_AUTO_CREATE=0 00:17:21.675 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:17:21.675 NVME_CMB=,, 00:17:21.675 NVME_PMR=,, 00:17:21.675 NVME_ZNS=,, 00:17:21.675 NVME_MS=,, 00:17:21.675 NVME_FDP=,, 00:17:21.675 SPDK_VAGRANT_DISTRO=fedora39 00:17:21.675 SPDK_VAGRANT_VMCPU=10 00:17:21.675 SPDK_VAGRANT_VMRAM=12288 00:17:21.675 SPDK_VAGRANT_PROVIDER=libvirt 00:17:21.675 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:17:21.675 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:17:21.675 SPDK_OPENSTACK_NETWORK=0 00:17:21.675 VAGRANT_PACKAGE_BOX=0 00:17:21.675 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:17:21.675 FORCE_DISTRO=true 00:17:21.675 VAGRANT_BOX_VERSION= 00:17:21.675 EXTRA_VAGRANTFILES= 00:17:21.675 NIC_MODEL=e1000 00:17:21.675 00:17:21.675 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:17:21.675 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:17:24.962 Bringing machine 'default' up with 'libvirt' provider... 00:17:26.335 ==> default: Creating image (snapshot of base box volume). 00:17:26.336 ==> default: Creating domain with the following settings... 00:17:26.336 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730122119_38c9fbc42a1f9f790383 00:17:26.336 ==> default: -- Domain type: kvm 00:17:26.336 ==> default: -- Cpus: 10 00:17:26.336 ==> default: -- Feature: acpi 00:17:26.336 ==> default: -- Feature: apic 00:17:26.336 ==> default: -- Feature: pae 00:17:26.336 ==> default: -- Memory: 12288M 00:17:26.336 ==> default: -- Memory Backing: hugepages: 00:17:26.336 ==> default: -- Management MAC: 00:17:26.336 ==> default: -- Loader: 00:17:26.336 ==> default: -- Nvram: 00:17:26.336 ==> default: -- Base box: spdk/fedora39 00:17:26.336 ==> default: -- Storage pool: default 00:17:26.336 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730122119_38c9fbc42a1f9f790383.img (20G) 00:17:26.336 ==> default: -- Volume Cache: default 00:17:26.336 ==> default: -- Kernel: 00:17:26.336 ==> default: -- Initrd: 00:17:26.336 ==> default: -- Graphics Type: vnc 00:17:26.336 ==> default: -- Graphics Port: -1 00:17:26.336 ==> default: -- Graphics IP: 127.0.0.1 00:17:26.336 ==> default: -- Graphics Password: Not defined 00:17:26.336 ==> default: -- Video Type: cirrus 00:17:26.336 ==> default: -- Video VRAM: 9216 00:17:26.336 ==> default: -- Sound Type: 00:17:26.336 ==> default: -- Keymap: en-us 00:17:26.336 ==> default: -- TPM Path: 00:17:26.336 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:17:26.336 ==> default: -- Command line args: 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:17:26.336 ==> default: -> value=-drive, 00:17:26.336 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:17:26.336 ==> default: -> value=-drive, 00:17:26.336 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:26.336 ==> default: -> value=-drive, 00:17:26.336 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:26.336 ==> default: -> value=-drive, 00:17:26.336 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:17:26.336 ==> default: -> value=-device, 00:17:26.336 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:17:26.595 ==> default: Creating shared folders metadata... 00:17:26.595 ==> default: Starting domain. 00:17:28.499 ==> default: Waiting for domain to get an IP address... 00:17:46.575 ==> default: Waiting for SSH to become available... 
00:17:46.575 ==> default: Configuring and enabling network interfaces... 00:17:48.482 default: SSH address: 192.168.121.77:22 00:17:48.482 default: SSH username: vagrant 00:17:48.482 default: SSH auth method: private key 00:17:51.012 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:17:57.565 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:18:04.125 ==> default: Mounting SSHFS shared folder... 00:18:05.545 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:18:05.545 ==> default: Checking Mount.. 00:18:06.479 ==> default: Folder Successfully Mounted! 00:18:06.479 ==> default: Running provisioner: file... 00:18:07.414 default: ~/.gitconfig => .gitconfig 00:18:07.673 00:18:07.673 SUCCESS! 00:18:07.673 00:18:07.673 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:18:07.673 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:18:07.673 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:18:07.673 00:18:07.682 [Pipeline] } 00:18:07.700 [Pipeline] // stage 00:18:07.710 [Pipeline] dir 00:18:07.711 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:18:07.713 [Pipeline] { 00:18:07.727 [Pipeline] catchError 00:18:07.729 [Pipeline] { 00:18:07.744 [Pipeline] sh 00:18:08.027 + vagrant ssh-config --host vagrant 00:18:08.027 + sed -ne /^Host/,$p 00:18:08.027 + tee ssh_conf 00:18:12.217 Host vagrant 00:18:12.217 HostName 192.168.121.77 00:18:12.217 User vagrant 00:18:12.217 Port 22 00:18:12.217 UserKnownHostsFile /dev/null 00:18:12.217 StrictHostKeyChecking no 00:18:12.217 PasswordAuthentication no 00:18:12.217 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:18:12.217 IdentitiesOnly yes 00:18:12.217 LogLevel FATAL 00:18:12.217 ForwardAgent yes 00:18:12.217 ForwardX11 yes 00:18:12.217 00:18:12.232 [Pipeline] withEnv 00:18:12.235 [Pipeline] { 00:18:12.250 [Pipeline] sh 00:18:12.529 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:18:12.530 source /etc/os-release 00:18:12.530 [[ -e /image.version ]] && img=$(< /image.version) 00:18:12.530 # Minimal, systemd-like check. 00:18:12.530 if [[ -e /.dockerenv ]]; then 00:18:12.530 # Clear garbage from the node's name: 00:18:12.530 # agt-er_autotest_547-896 -> autotest_547-896 00:18:12.530 # $HOSTNAME is the actual container id 00:18:12.530 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:18:12.530 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:18:12.530 # We can assume this is a mount from a host where container is running, 00:18:12.530 # so fetch its hostname to easily identify the target swarm worker. 
00:18:12.530 container="$(< /etc/hostname) ($agent)" 00:18:12.530 else 00:18:12.530 # Fallback 00:18:12.530 container=$agent 00:18:12.530 fi 00:18:12.530 fi 00:18:12.530 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:18:12.530 00:18:12.798 [Pipeline] } 00:18:12.814 [Pipeline] // withEnv 00:18:12.822 [Pipeline] setCustomBuildProperty 00:18:12.838 [Pipeline] stage 00:18:12.840 [Pipeline] { (Tests) 00:18:12.858 [Pipeline] sh 00:18:13.173 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:18:13.446 [Pipeline] sh 00:18:13.725 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:18:13.740 [Pipeline] timeout 00:18:13.740 Timeout set to expire in 1 hr 30 min 00:18:13.742 [Pipeline] { 00:18:13.757 [Pipeline] sh 00:18:14.035 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:18:14.600 HEAD is now at 83ba90867 fio/bdev: fix typo in README 00:18:14.612 [Pipeline] sh 00:18:14.890 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:18:15.162 [Pipeline] sh 00:18:15.438 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:18:15.728 [Pipeline] sh 00:18:16.006 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:18:16.265 ++ readlink -f spdk_repo 00:18:16.265 + DIR_ROOT=/home/vagrant/spdk_repo 00:18:16.265 + [[ -n /home/vagrant/spdk_repo ]] 00:18:16.265 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:18:16.265 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:18:16.265 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:18:16.265 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:18:16.265 + [[ -d /home/vagrant/spdk_repo/output ]] 00:18:16.265 + [[ raid-vg-autotest == pkgdep-* ]] 00:18:16.265 + cd /home/vagrant/spdk_repo 00:18:16.265 + source /etc/os-release 00:18:16.265 ++ NAME='Fedora Linux' 00:18:16.265 ++ VERSION='39 (Cloud Edition)' 00:18:16.265 ++ ID=fedora 00:18:16.265 ++ VERSION_ID=39 00:18:16.265 ++ VERSION_CODENAME= 00:18:16.265 ++ PLATFORM_ID=platform:f39 00:18:16.265 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:18:16.265 ++ ANSI_COLOR='0;38;2;60;110;180' 00:18:16.265 ++ LOGO=fedora-logo-icon 00:18:16.265 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:18:16.265 ++ HOME_URL=https://fedoraproject.org/ 00:18:16.266 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:18:16.266 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:18:16.266 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:18:16.266 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:18:16.266 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:18:16.266 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:18:16.266 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:18:16.266 ++ SUPPORT_END=2024-11-12 00:18:16.266 ++ VARIANT='Cloud Edition' 00:18:16.266 ++ VARIANT_ID=cloud 00:18:16.266 + uname -a 00:18:16.266 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:18:16.266 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:18:16.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.784 Hugepages 00:18:16.784 node hugesize free / total 00:18:16.784 node0 1048576kB 0 / 0 00:18:16.784 node0 2048kB 0 / 0 00:18:16.784 00:18:16.784 Type BDF Vendor Device NUMA Driver Device Block devices 00:18:16.784 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:18:16.784 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:18:16.784 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:18:16.784 + rm -f /tmp/spdk-ld-path 00:18:16.784 + source autorun-spdk.conf 00:18:16.784 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:16.784 ++ SPDK_RUN_ASAN=1 00:18:16.784 ++ SPDK_RUN_UBSAN=1 00:18:16.784 ++ SPDK_TEST_RAID=1 00:18:16.784 ++ SPDK_TEST_NATIVE_DPDK=main 00:18:16.784 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:18:16.784 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:16.784 ++ RUN_NIGHTLY=1 00:18:16.784 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:18:16.784 + [[ -n '' ]] 00:18:16.784 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:18:16.784 + for M in /var/spdk/build-*-manifest.txt 00:18:16.784 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:18:16.784 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:18:16.784 + for M in /var/spdk/build-*-manifest.txt 00:18:16.784 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:18:16.784 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:18:16.784 + for M in /var/spdk/build-*-manifest.txt 00:18:16.784 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:18:16.784 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:18:16.784 ++ uname 00:18:16.784 + [[ Linux == \L\i\n\u\x ]] 00:18:16.784 + sudo dmesg -T 00:18:16.784 + sudo dmesg --clear 00:18:16.784 + dmesg_pid=5980 00:18:16.784 + [[ Fedora Linux == FreeBSD ]] 00:18:16.784 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:16.784 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:18:16.784 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:18:16.784 + [[ -x /usr/src/fio-static/fio ]] 00:18:16.784 + sudo dmesg -Tw 00:18:16.784 + export FIO_BIN=/usr/src/fio-static/fio 00:18:16.784 + FIO_BIN=/usr/src/fio-static/fio 00:18:16.784 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:18:16.784 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:18:16.784 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:18:16.784 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:16.784 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:18:16.784 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:18:16.784 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:16.784 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:18:16.784 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:18:16.784 Test configuration: 00:18:16.784 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:16.784 SPDK_RUN_ASAN=1 00:18:16.784 SPDK_RUN_UBSAN=1 00:18:16.784 SPDK_TEST_RAID=1 00:18:16.784 SPDK_TEST_NATIVE_DPDK=main 00:18:16.784 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:18:16.784 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:17.045 RUN_NIGHTLY=1 13:29:30 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:18:17.045 13:29:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.045 13:29:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:18:17.045 13:29:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:18:17.045 13:29:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.045 13:29:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.045 13:29:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.045 13:29:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.045 13:29:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.045 13:29:30 -- paths/export.sh@5 -- $ export PATH 00:18:17.045 13:29:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.045 13:29:30 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:18:17.045 13:29:30 -- common/autobuild_common.sh@486 -- $ date +%s 00:18:17.045 13:29:30 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730122170.XXXXXX 00:18:17.045 13:29:30 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730122170.9qAaA9 00:18:17.045 13:29:30 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:18:17.045 13:29:30 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:18:17.045 13:29:30 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:18:17.045 13:29:31 -- common/autobuild_common.sh@493 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:18:17.045 13:29:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:18:17.045 13:29:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:18:17.045 13:29:31 -- common/autobuild_common.sh@502 -- $ get_config_params 00:18:17.045 13:29:31 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:18:17.045 13:29:31 -- common/autotest_common.sh@10 -- $ set +x 00:18:17.045 13:29:31 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:18:17.045 13:29:31 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:18:17.045 13:29:31 -- pm/common@17 -- $ local monitor 00:18:17.045 13:29:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:17.045 13:29:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:18:17.045 13:29:31 -- pm/common@25 -- $ sleep 1 00:18:17.045 13:29:31 -- pm/common@21 -- $ date +%s 00:18:17.045 13:29:31 -- pm/common@21 -- $ date +%s 00:18:17.045 13:29:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730122171 00:18:17.045 13:29:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730122171 00:18:17.045 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730122171_collect-vmstat.pm.log 00:18:17.045 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730122171_collect-cpu-load.pm.log 00:18:17.984 13:29:32 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:18:17.984 13:29:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:18:17.984 13:29:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:18:17.984 13:29:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:18:17.984 13:29:32 -- spdk/autobuild.sh@16 -- $ date -u 00:18:17.984 Mon Oct 28 01:29:32 PM UTC 2024 00:18:17.984 13:29:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:18:17.984 v25.01-pre-122-g83ba90867 00:18:17.984 13:29:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:18:17.984 13:29:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:18:17.984 13:29:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:18:17.984 13:29:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:18:17.984 13:29:32 -- common/autotest_common.sh@10 -- $ set +x 00:18:17.984 ************************************ 00:18:17.984 START TEST asan 00:18:17.984 ************************************ 00:18:17.984 using asan 00:18:17.984 13:29:32 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:18:17.984 00:18:17.984 real 0m0.000s 00:18:17.984 user 0m0.000s 00:18:17.984 sys 0m0.000s 00:18:17.984 13:29:32 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:18:17.984 ************************************ 00:18:17.984 END TEST asan 00:18:17.984 ************************************ 00:18:17.984 13:29:32 asan -- common/autotest_common.sh@10 -- $ set +x 00:18:17.984 13:29:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:18:17.984 13:29:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:18:17.984 13:29:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:18:17.984 13:29:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:18:17.984 13:29:32 -- common/autotest_common.sh@10 -- $ set +x 00:18:17.984 
************************************ 00:18:17.984 START TEST ubsan 00:18:17.984 ************************************ 00:18:17.984 using ubsan 00:18:17.984 13:29:32 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:18:17.984 00:18:17.984 real 0m0.000s 00:18:17.984 user 0m0.000s 00:18:17.984 sys 0m0.000s 00:18:17.984 13:29:32 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:18:17.984 13:29:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:18:17.984 ************************************ 00:18:17.984 END TEST ubsan 00:18:17.984 ************************************ 00:18:17.984 13:29:32 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:18:17.984 13:29:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:18:17.984 13:29:32 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:18:17.984 13:29:32 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:18:17.984 13:29:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:18:17.984 13:29:32 -- common/autotest_common.sh@10 -- $ set +x 00:18:18.243 ************************************ 00:18:18.243 START TEST build_native_dpdk 00:18:18.243 ************************************ 00:18:18.243 13:29:32 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:18:18.243 6dad0bb5c8 event/cnxk: fix getwork write data on reconfig 00:18:18.243 b74f298f9b test/event: fix device stop 00:18:18.243 34e3ad3a1e eventdev: remove single event enqueue and dequeue 00:18:18.243 5079ede71e event/skeleton: remove single event enqueue and dequeue 00:18:18.243 a83fc0f4e1 event/cnxk: remove single event enqueue and dequeue 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc1 00:18:18.243 13:29:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 
00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc1 21.11.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 21.11.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:18:18.244 patching file config/rte_config.h 00:18:18.244 Hunk #1 succeeded at 71 (offset 12 lines). 
00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc1 24.07.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 24.07.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc1 24.07.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc1 '>=' 24.07.0 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:18:18.244 13:29:32 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:18:18.244 13:29:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:18:18.244 patching file drivers/bus/pci/linux/pci_uio.c 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:18:18.244 13:29:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:18:23.538 The Meson build 
system 00:18:23.538 Version: 1.5.0 00:18:23.538 Source dir: /home/vagrant/spdk_repo/dpdk 00:18:23.538 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:18:23.538 Build type: native build 00:18:23.538 Project name: DPDK 00:18:23.538 Project version: 24.11.0-rc1 00:18:23.538 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:18:23.538 C linker for the host machine: gcc ld.bfd 2.40-14 00:18:23.538 Host machine cpu family: x86_64 00:18:23.538 Host machine cpu: x86_64 00:18:23.538 Message: ## Building in Developer Mode ## 00:18:23.538 Program pkg-config found: YES (/usr/bin/pkg-config) 00:18:23.538 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:18:23.538 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:18:23.538 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:18:23.538 Program cat found: YES (/usr/bin/cat) 00:18:23.538 config/meson.build:119: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:18:23.538 Compiler for C supports arguments -march=native: YES 00:18:23.538 Checking for size of "void *" : 8 00:18:23.538 Checking for size of "void *" : 8 (cached) 00:18:23.538 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:18:23.538 Library m found: YES 00:18:23.538 Library numa found: YES 00:18:23.538 Has header "numaif.h" : YES 00:18:23.538 Library fdt found: NO 00:18:23.538 Library execinfo found: NO 00:18:23.538 Has header "execinfo.h" : YES 00:18:23.538 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:18:23.538 Run-time dependency libarchive found: NO (tried pkgconfig) 00:18:23.538 Run-time dependency libbsd found: NO (tried pkgconfig) 00:18:23.538 Run-time dependency jansson found: NO (tried pkgconfig) 00:18:23.538 Run-time dependency openssl found: YES 3.1.1 00:18:23.538 Run-time dependency libpcap found: YES 1.10.4 00:18:23.538 Has header "pcap.h" with dependency libpcap: YES 00:18:23.538 Compiler for C supports arguments -Wcast-qual: YES 00:18:23.538 Compiler for C supports arguments -Wdeprecated: YES 00:18:23.538 Compiler for C supports arguments -Wformat: YES 00:18:23.538 Compiler for C supports arguments -Wformat-nonliteral: NO 00:18:23.538 Compiler for C supports arguments -Wformat-security: NO 00:18:23.538 Compiler for C supports arguments -Wmissing-declarations: YES 00:18:23.538 Compiler for C supports arguments -Wmissing-prototypes: YES 00:18:23.538 Compiler for C supports arguments -Wnested-externs: YES 00:18:23.538 Compiler for C supports arguments -Wold-style-definition: YES 00:18:23.538 Compiler for C supports arguments -Wpointer-arith: YES 00:18:23.538 Compiler for C supports arguments -Wsign-compare: YES 00:18:23.538 Compiler for C supports arguments -Wstrict-prototypes: YES 00:18:23.538 Compiler for C supports arguments -Wundef: YES 00:18:23.538 Compiler for C supports arguments -Wwrite-strings: YES 00:18:23.538 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:18:23.538 Compiler for C 
supports arguments -Wno-packed-not-aligned: YES 00:18:23.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:18:23.539 Program objdump found: YES (/usr/bin/objdump) 00:18:23.539 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:18:23.539 Checking if "AVX512 checking" compiles: YES 00:18:23.539 Fetching value of define "__AVX512F__" : (undefined) 00:18:23.539 Fetching value of define "__SSE4_2__" : 1 00:18:23.539 Fetching value of define "__AES__" : 1 00:18:23.539 Fetching value of define "__AVX__" : 1 00:18:23.539 Fetching value of define "__AVX2__" : 1 00:18:23.539 Fetching value of define "__AVX512BW__" : (undefined) 00:18:23.539 Fetching value of define "__AVX512CD__" : (undefined) 00:18:23.539 Fetching value of define "__AVX512DQ__" : (undefined) 00:18:23.539 Fetching value of define "__AVX512F__" : (undefined) 00:18:23.539 Fetching value of define "__AVX512VL__" : (undefined) 00:18:23.539 Fetching value of define "__PCLMUL__" : 1 00:18:23.539 Fetching value of define "__RDRND__" : 1 00:18:23.539 Fetching value of define "__RDSEED__" : 1 00:18:23.539 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:18:23.539 Compiler for C supports arguments -Wno-format-truncation: YES 00:18:23.539 Message: lib/log: Defining dependency "log" 00:18:23.539 Message: lib/kvargs: Defining dependency "kvargs" 00:18:23.539 Message: lib/argparse: Defining dependency "argparse" 00:18:23.539 Message: lib/telemetry: Defining dependency "telemetry" 00:18:23.539 Checking for function "getentropy" : NO 00:18:23.539 Message: lib/eal: Defining dependency "eal" 00:18:23.539 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:18:23.539 Message: lib/ring: Defining dependency "ring" 00:18:23.539 Message: lib/rcu: Defining dependency "rcu" 00:18:23.539 Message: lib/mempool: Defining dependency "mempool" 00:18:23.539 Message: lib/mbuf: Defining dependency "mbuf" 00:18:23.539 Fetching value of define "__PCLMUL__" : 1 
(cached) 00:18:23.539 Compiler for C supports arguments -mpclmul: YES 00:18:23.539 Compiler for C supports arguments -maes: YES 00:18:23.539 Compiler for C supports arguments -mvpclmulqdq: YES 00:18:23.539 Message: lib/net: Defining dependency "net" 00:18:23.539 Message: lib/meter: Defining dependency "meter" 00:18:23.539 Message: lib/ethdev: Defining dependency "ethdev" 00:18:23.539 Message: lib/pci: Defining dependency "pci" 00:18:23.539 Message: lib/cmdline: Defining dependency "cmdline" 00:18:23.539 Message: lib/metrics: Defining dependency "metrics" 00:18:23.539 Message: lib/hash: Defining dependency "hash" 00:18:23.539 Message: lib/timer: Defining dependency "timer" 00:18:23.539 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:18:23.539 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:18:23.539 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:18:23.539 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:18:23.539 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:18:23.539 Message: lib/acl: Defining dependency "acl" 00:18:23.539 Message: lib/bbdev: Defining dependency "bbdev" 00:18:23.539 Message: lib/bitratestats: Defining dependency "bitratestats" 00:18:23.539 Run-time dependency libelf found: YES 0.191 00:18:23.539 Message: lib/bpf: Defining dependency "bpf" 00:18:23.539 Message: lib/cfgfile: Defining dependency "cfgfile" 00:18:23.539 Message: lib/compressdev: Defining dependency "compressdev" 00:18:23.539 Message: lib/cryptodev: Defining dependency "cryptodev" 00:18:23.539 Message: lib/distributor: Defining dependency "distributor" 00:18:23.539 Message: lib/dmadev: Defining dependency "dmadev" 00:18:23.539 Message: lib/efd: Defining dependency "efd" 00:18:23.539 Message: lib/eventdev: Defining dependency "eventdev" 00:18:23.539 Message: lib/dispatcher: Defining dependency "dispatcher" 00:18:23.539 Message: lib/gpudev: Defining dependency "gpudev" 00:18:23.539 
Message: lib/gro: Defining dependency "gro" 00:18:23.539 Message: lib/gso: Defining dependency "gso" 00:18:23.539 Message: lib/ip_frag: Defining dependency "ip_frag" 00:18:23.539 Message: lib/jobstats: Defining dependency "jobstats" 00:18:23.539 Message: lib/latencystats: Defining dependency "latencystats" 00:18:23.539 Message: lib/lpm: Defining dependency "lpm" 00:18:23.539 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:18:23.539 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:18:23.539 Fetching value of define "__AVX512IFMA__" : (undefined) 00:18:23.539 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:18:23.539 Message: lib/member: Defining dependency "member" 00:18:23.539 Message: lib/pcapng: Defining dependency "pcapng" 00:18:23.539 Message: lib/power: Defining dependency "power" 00:18:23.539 Message: lib/rawdev: Defining dependency "rawdev" 00:18:23.539 Message: lib/regexdev: Defining dependency "regexdev" 00:18:23.539 Message: lib/mldev: Defining dependency "mldev" 00:18:23.539 Message: lib/rib: Defining dependency "rib" 00:18:23.539 Message: lib/reorder: Defining dependency "reorder" 00:18:23.539 Message: lib/sched: Defining dependency "sched" 00:18:23.539 Message: lib/security: Defining dependency "security" 00:18:23.539 Message: lib/stack: Defining dependency "stack" 00:18:23.539 Has header "linux/userfaultfd.h" : YES 00:18:23.539 Has header "linux/vduse.h" : YES 00:18:23.539 Message: lib/vhost: Defining dependency "vhost" 00:18:23.539 Message: lib/ipsec: Defining dependency "ipsec" 00:18:23.539 Message: lib/pdcp: Defining dependency "pdcp" 00:18:23.539 Message: lib/fib: Defining dependency "fib" 00:18:23.539 Message: lib/port: Defining dependency "port" 00:18:23.539 Message: lib/pdump: Defining dependency "pdump" 00:18:23.539 Message: lib/table: Defining dependency "table" 00:18:23.539 Message: lib/pipeline: Defining dependency "pipeline" 00:18:23.539 Message: lib/graph: Defining dependency 
"graph" 00:18:23.539 Message: lib/node: Defining dependency "node" 00:18:23.539 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:18:23.539 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:18:23.539 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:18:23.539 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:18:23.539 Compiler for C supports arguments -Wno-sign-compare: YES 00:18:23.539 Compiler for C supports arguments -Wno-unused-value: YES 00:18:23.539 Compiler for C supports arguments -Wno-format: YES 00:18:23.539 Compiler for C supports arguments -Wno-format-security: YES 00:18:23.539 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:18:23.539 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:18:23.539 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:18:23.539 Compiler for C supports arguments -Wno-unused-parameter: YES 00:18:23.539 Compiler for C supports arguments -march=skylake-avx512: YES 00:18:25.442 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:18:25.442 Has header "sys/epoll.h" : YES 00:18:25.442 Program doxygen found: YES (/usr/local/bin/doxygen) 00:18:25.442 Configuring doxy-api-html.conf using configuration 00:18:25.442 doc/api/meson.build:54: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 00:18:25.442 Configuring doxy-api-man.conf using configuration 00:18:25.442 doc/api/meson.build:67: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 
00:18:25.442 Program mandb found: YES (/usr/bin/mandb) 00:18:25.442 Program sphinx-build found: NO 00:18:25.442 Program sphinx-build found: NO 00:18:25.443 Configuring rte_build_config.h using configuration 00:18:25.443 Message: 00:18:25.443 ================= 00:18:25.443 Applications Enabled 00:18:25.443 ================= 00:18:25.443 00:18:25.443 apps: 00:18:25.443 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:18:25.443 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:18:25.443 test-pmd, test-regex, test-sad, test-security-perf, 00:18:25.443 00:18:25.443 Message: 00:18:25.443 ================= 00:18:25.443 Libraries Enabled 00:18:25.443 ================= 00:18:25.443 00:18:25.443 libs: 00:18:25.443 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:18:25.443 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:18:25.443 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:18:25.443 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:18:25.443 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:18:25.443 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:18:25.443 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:18:25.443 graph, node, 00:18:25.443 00:18:25.443 Message: 00:18:25.443 =============== 00:18:25.443 Drivers Enabled 00:18:25.443 =============== 00:18:25.443 00:18:25.443 common: 00:18:25.443 00:18:25.443 bus: 00:18:25.443 pci, vdev, 00:18:25.443 mempool: 00:18:25.443 ring, 00:18:25.443 dma: 00:18:25.443 00:18:25.443 net: 00:18:25.443 i40e, 00:18:25.443 raw: 00:18:25.443 00:18:25.443 crypto: 00:18:25.443 00:18:25.443 compress: 00:18:25.443 00:18:25.443 regex: 00:18:25.443 00:18:25.443 ml: 00:18:25.443 00:18:25.443 vdpa: 00:18:25.443 00:18:25.443 event: 00:18:25.443 00:18:25.443 baseband: 00:18:25.443 00:18:25.443 gpu: 00:18:25.443 
00:18:25.443 00:18:25.443 Message: 00:18:25.443 ================= 00:18:25.443 Content Skipped 00:18:25.443 ================= 00:18:25.443 00:18:25.443 apps: 00:18:25.443 00:18:25.443 libs: 00:18:25.443 00:18:25.443 drivers: 00:18:25.443 common/cpt: not in enabled drivers build config 00:18:25.443 common/dpaax: not in enabled drivers build config 00:18:25.443 common/iavf: not in enabled drivers build config 00:18:25.443 common/idpf: not in enabled drivers build config 00:18:25.443 common/ionic: not in enabled drivers build config 00:18:25.443 common/mvep: not in enabled drivers build config 00:18:25.443 common/octeontx: not in enabled drivers build config 00:18:25.443 bus/auxiliary: not in enabled drivers build config 00:18:25.443 bus/cdx: not in enabled drivers build config 00:18:25.443 bus/dpaa: not in enabled drivers build config 00:18:25.443 bus/fslmc: not in enabled drivers build config 00:18:25.443 bus/ifpga: not in enabled drivers build config 00:18:25.443 bus/platform: not in enabled drivers build config 00:18:25.443 bus/uacce: not in enabled drivers build config 00:18:25.443 bus/vmbus: not in enabled drivers build config 00:18:25.443 common/cnxk: not in enabled drivers build config 00:18:25.443 common/mlx5: not in enabled drivers build config 00:18:25.443 common/nfp: not in enabled drivers build config 00:18:25.443 common/nitrox: not in enabled drivers build config 00:18:25.443 common/qat: not in enabled drivers build config 00:18:25.443 common/sfc_efx: not in enabled drivers build config 00:18:25.443 mempool/bucket: not in enabled drivers build config 00:18:25.443 mempool/cnxk: not in enabled drivers build config 00:18:25.443 mempool/dpaa: not in enabled drivers build config 00:18:25.443 mempool/dpaa2: not in enabled drivers build config 00:18:25.443 mempool/octeontx: not in enabled drivers build config 00:18:25.443 mempool/stack: not in enabled drivers build config 00:18:25.443 dma/cnxk: not in enabled drivers build config 00:18:25.443 dma/dpaa: not in 
enabled drivers build config 00:18:25.443 dma/dpaa2: not in enabled drivers build config 00:18:25.443 dma/hisilicon: not in enabled drivers build config 00:18:25.443 dma/idxd: not in enabled drivers build config 00:18:25.443 dma/ioat: not in enabled drivers build config 00:18:25.443 dma/odm: not in enabled drivers build config 00:18:25.443 dma/skeleton: not in enabled drivers build config 00:18:25.443 net/af_packet: not in enabled drivers build config 00:18:25.443 net/af_xdp: not in enabled drivers build config 00:18:25.443 net/ark: not in enabled drivers build config 00:18:25.443 net/atlantic: not in enabled drivers build config 00:18:25.443 net/avp: not in enabled drivers build config 00:18:25.443 net/axgbe: not in enabled drivers build config 00:18:25.443 net/bnx2x: not in enabled drivers build config 00:18:25.443 net/bnxt: not in enabled drivers build config 00:18:25.443 net/bonding: not in enabled drivers build config 00:18:25.443 net/cnxk: not in enabled drivers build config 00:18:25.443 net/cpfl: not in enabled drivers build config 00:18:25.443 net/cxgbe: not in enabled drivers build config 00:18:25.443 net/dpaa: not in enabled drivers build config 00:18:25.443 net/dpaa2: not in enabled drivers build config 00:18:25.443 net/e1000: not in enabled drivers build config 00:18:25.443 net/ena: not in enabled drivers build config 00:18:25.443 net/enetc: not in enabled drivers build config 00:18:25.443 net/enetfec: not in enabled drivers build config 00:18:25.443 net/enic: not in enabled drivers build config 00:18:25.443 net/failsafe: not in enabled drivers build config 00:18:25.443 net/fm10k: not in enabled drivers build config 00:18:25.443 net/gve: not in enabled drivers build config 00:18:25.443 net/hinic: not in enabled drivers build config 00:18:25.443 net/hns3: not in enabled drivers build config 00:18:25.443 net/iavf: not in enabled drivers build config 00:18:25.443 net/ice: not in enabled drivers build config 00:18:25.443 net/idpf: not in enabled drivers 
build config 00:18:25.443 net/igc: not in enabled drivers build config 00:18:25.443 net/ionic: not in enabled drivers build config 00:18:25.443 net/ipn3ke: not in enabled drivers build config 00:18:25.443 net/ixgbe: not in enabled drivers build config 00:18:25.443 net/mana: not in enabled drivers build config 00:18:25.443 net/memif: not in enabled drivers build config 00:18:25.443 net/mlx4: not in enabled drivers build config 00:18:25.443 net/mlx5: not in enabled drivers build config 00:18:25.443 net/mvneta: not in enabled drivers build config 00:18:25.443 net/mvpp2: not in enabled drivers build config 00:18:25.443 net/netvsc: not in enabled drivers build config 00:18:25.443 net/nfb: not in enabled drivers build config 00:18:25.443 net/nfp: not in enabled drivers build config 00:18:25.443 net/ngbe: not in enabled drivers build config 00:18:25.443 net/ntnic: not in enabled drivers build config 00:18:25.443 net/null: not in enabled drivers build config 00:18:25.443 net/octeontx: not in enabled drivers build config 00:18:25.443 net/octeon_ep: not in enabled drivers build config 00:18:25.443 net/pcap: not in enabled drivers build config 00:18:25.443 net/pfe: not in enabled drivers build config 00:18:25.443 net/qede: not in enabled drivers build config 00:18:25.443 net/ring: not in enabled drivers build config 00:18:25.443 net/sfc: not in enabled drivers build config 00:18:25.443 net/softnic: not in enabled drivers build config 00:18:25.443 net/tap: not in enabled drivers build config 00:18:25.443 net/thunderx: not in enabled drivers build config 00:18:25.443 net/txgbe: not in enabled drivers build config 00:18:25.443 net/vdev_netvsc: not in enabled drivers build config 00:18:25.443 net/vhost: not in enabled drivers build config 00:18:25.443 net/virtio: not in enabled drivers build config 00:18:25.443 net/vmxnet3: not in enabled drivers build config 00:18:25.443 raw/cnxk_bphy: not in enabled drivers build config 00:18:25.443 raw/cnxk_gpio: not in enabled drivers build 
config 00:18:25.443 raw/dpaa2_cmdif: not in enabled drivers build config 00:18:25.443 raw/ifpga: not in enabled drivers build config 00:18:25.443 raw/ntb: not in enabled drivers build config 00:18:25.443 raw/skeleton: not in enabled drivers build config 00:18:25.443 crypto/armv8: not in enabled drivers build config 00:18:25.443 crypto/bcmfs: not in enabled drivers build config 00:18:25.443 crypto/caam_jr: not in enabled drivers build config 00:18:25.443 crypto/ccp: not in enabled drivers build config 00:18:25.443 crypto/cnxk: not in enabled drivers build config 00:18:25.443 crypto/dpaa_sec: not in enabled drivers build config 00:18:25.443 crypto/dpaa2_sec: not in enabled drivers build config 00:18:25.443 crypto/ionic: not in enabled drivers build config 00:18:25.443 crypto/ipsec_mb: not in enabled drivers build config 00:18:25.443 crypto/mlx5: not in enabled drivers build config 00:18:25.443 crypto/mvsam: not in enabled drivers build config 00:18:25.443 crypto/nitrox: not in enabled drivers build config 00:18:25.443 crypto/null: not in enabled drivers build config 00:18:25.443 crypto/octeontx: not in enabled drivers build config 00:18:25.443 crypto/openssl: not in enabled drivers build config 00:18:25.443 crypto/scheduler: not in enabled drivers build config 00:18:25.443 crypto/uadk: not in enabled drivers build config 00:18:25.443 crypto/virtio: not in enabled drivers build config 00:18:25.443 compress/isal: not in enabled drivers build config 00:18:25.443 compress/mlx5: not in enabled drivers build config 00:18:25.443 compress/nitrox: not in enabled drivers build config 00:18:25.443 compress/octeontx: not in enabled drivers build config 00:18:25.443 compress/uadk: not in enabled drivers build config 00:18:25.443 compress/zlib: not in enabled drivers build config 00:18:25.443 regex/mlx5: not in enabled drivers build config 00:18:25.443 regex/cn9k: not in enabled drivers build config 00:18:25.443 ml/cnxk: not in enabled drivers build config 00:18:25.443 vdpa/ifc: 
not in enabled drivers build config 00:18:25.443 vdpa/mlx5: not in enabled drivers build config 00:18:25.443 vdpa/nfp: not in enabled drivers build config 00:18:25.443 vdpa/sfc: not in enabled drivers build config 00:18:25.443 event/cnxk: not in enabled drivers build config 00:18:25.443 event/dlb2: not in enabled drivers build config 00:18:25.443 event/dpaa: not in enabled drivers build config 00:18:25.443 event/dpaa2: not in enabled drivers build config 00:18:25.443 event/dsw: not in enabled drivers build config 00:18:25.443 event/opdl: not in enabled drivers build config 00:18:25.443 event/skeleton: not in enabled drivers build config 00:18:25.443 event/sw: not in enabled drivers build config 00:18:25.443 event/octeontx: not in enabled drivers build config 00:18:25.443 baseband/acc: not in enabled drivers build config 00:18:25.443 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:18:25.443 baseband/fpga_lte_fec: not in enabled drivers build config 00:18:25.443 baseband/la12xx: not in enabled drivers build config 00:18:25.444 baseband/null: not in enabled drivers build config 00:18:25.444 baseband/turbo_sw: not in enabled drivers build config 00:18:25.444 gpu/cuda: not in enabled drivers build config 00:18:25.444 00:18:25.444 00:18:25.444 Build targets in project: 224 00:18:25.444 00:18:25.444 DPDK 24.11.0-rc1 00:18:25.444 00:18:25.444 User defined options 00:18:25.444 libdir : lib 00:18:25.444 prefix : /home/vagrant/spdk_repo/dpdk/build 00:18:25.444 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:18:25.444 c_link_args : 00:18:25.444 enable_docs : false 00:18:25.444 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:18:25.444 enable_kmods : false 00:18:25.444 machine : native 00:18:25.444 tests : false 00:18:25.444 00:18:25.444 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:25.444 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and 
deprecated. 00:18:25.444 13:29:39 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:18:25.444 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:18:25.444 [1/724] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:18:25.701 [2/724] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:18:25.701 [3/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:18:25.701 [4/724] Linking static target lib/librte_kvargs.a 00:18:25.701 [5/724] Compiling C object lib/librte_log.a.p/log_log.c.o 00:18:25.701 [6/724] Linking static target lib/librte_log.a 00:18:25.701 [7/724] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:18:25.701 [8/724] Linking static target lib/librte_argparse.a 00:18:25.958 [9/724] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:18:25.958 [10/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:18:25.958 [11/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:18:25.958 [12/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:18:25.958 [13/724] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:18:26.215 [14/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:18:26.215 [15/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:18:26.215 [16/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:18:26.215 [17/724] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:18:26.215 [18/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:18:26.215 [19/724] Linking target lib/librte_log.so.25.0 00:18:26.473 [20/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:18:26.473 [21/724] Generating symbol 
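(Editor's note on the WARNING above: meson deprecated invoking configuration as bare `meson [options]`; the explicit `meson setup` subcommand is the supported form. The sketch below reconstructs what the equivalent explicit invocation would look like from the "User defined options" summary printed earlier in this log. It is an illustration only — the paths are the ones this log happens to use, `machine`/`tests` option spellings may differ across DPDK versions, and the exact command SPDK's autobuild script runs is not shown in this excerpt.)

```shell
# Hedged sketch: explicit `meson setup` form matching the logged options.
# Paths below are from this build log and will differ on other hosts.
meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dtests=false

# Then build, as the log does (autobuild_common.sh line 192):
ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
```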
file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:18:26.473 [22/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:18:26.473 [23/724] Linking target lib/librte_kvargs.so.25.0 00:18:26.473 [24/724] Linking target lib/librte_argparse.so.25.0 00:18:26.732 [25/724] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:18:26.732 [26/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:18:26.732 [27/724] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:18:26.732 [28/724] Linking static target lib/librte_telemetry.a 00:18:26.732 [29/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:18:26.732 [30/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:18:26.990 [31/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:18:26.990 [32/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:18:26.990 [33/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:18:26.990 [34/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:18:27.247 [35/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:18:27.247 [36/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:18:27.247 [37/724] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:18:27.504 [38/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:18:27.504 [39/724] Linking target lib/librte_telemetry.so.25.0 00:18:27.504 [40/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:18:27.504 [41/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:18:27.504 [42/724] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:18:27.504 [43/724] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:18:27.504 [44/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:18:27.504 [45/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:18:27.785 [46/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:18:27.785 [47/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:18:27.785 [48/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:18:27.785 [49/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:18:28.047 [50/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:18:28.047 [51/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:18:28.305 [52/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:18:28.305 [53/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:18:28.563 [54/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:18:28.563 [55/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:18:28.563 [56/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:18:28.563 [57/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:18:28.563 [58/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:18:28.821 [59/724] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:18:28.821 [60/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:18:29.079 [61/724] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:18:29.079 [62/724] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:18:29.079 [63/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:18:29.079 [64/724] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:18:29.079 [65/724] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:18:29.079 [66/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:18:29.336 [67/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:18:29.336 [68/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:18:29.336 [69/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:18:29.336 [70/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:18:29.595 [71/724] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:18:29.595 [72/724] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:18:29.853 [73/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:18:29.853 [74/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:18:29.853 [75/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:18:30.111 [76/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:18:30.111 [77/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:18:30.111 [78/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:18:30.111 [79/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:18:30.111 [80/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:18:30.369 [81/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:18:30.369 [82/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:18:30.369 [83/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:18:30.369 [84/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:18:30.369 [85/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:18:30.627 [86/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:18:30.627 [87/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:18:30.627 [88/724] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:18:30.885 [89/724] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:18:30.885 [90/724] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:18:30.885 [91/724] Linking static target lib/librte_ring.a 00:18:31.143 [92/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:18:31.143 [93/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:18:31.143 [94/724] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:18:31.401 [95/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:18:31.401 [96/724] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:18:31.401 [97/724] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.401 [98/724] Linking static target lib/librte_eal.a 00:18:31.401 [99/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:18:31.401 [100/724] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:18:31.659 [101/724] Linking static target lib/librte_mempool.a 00:18:31.659 [102/724] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:18:31.659 [103/724] Linking static target lib/librte_rcu.a 00:18:31.920 [104/724] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.920 [105/724] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:18:31.920 [106/724] Linking static target lib/net/libnet_crc_avx512_lib.a 00:18:31.920 [107/724] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:18:31.920 [108/724] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:18:32.177 [109/724] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:18:32.177 [110/724] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:18:32.177 [111/724] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:18:32.177 [112/724] Compiling 
C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:18:32.177 [113/724] Linking static target lib/librte_mbuf.a 00:18:32.177 [114/724] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.742 [115/724] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:18:32.743 [116/724] Linking static target lib/librte_net.a 00:18:32.743 [117/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:18:32.743 [118/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:18:32.743 [119/724] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:33.000 [120/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:18:33.000 [121/724] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:18:33.000 [122/724] Linking static target lib/librte_meter.a 00:18:33.000 [123/724] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:18:33.000 [124/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:18:33.258 [125/724] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:18:33.824 [126/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:18:33.824 [127/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:18:34.081 [128/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:18:34.081 [129/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:18:34.339 [130/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:18:34.596 [131/724] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:18:34.596 [132/724] Linking static target lib/librte_pci.a 00:18:34.596 [133/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:18:34.596 [134/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:18:34.596 [135/724] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:18:34.854 [136/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:18:34.854 [137/724] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:34.854 [138/724] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:18:34.854 [139/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:18:34.854 [140/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:18:34.854 [141/724] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:18:34.854 [142/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:18:35.111 [143/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:18:35.111 [144/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:18:35.111 [145/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:18:35.111 [146/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:18:35.111 [147/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:18:35.111 [148/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:18:35.367 [149/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:18:35.367 [150/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:18:35.705 [151/724] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:18:35.963 [152/724] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:18:35.963 [153/724] Linking static target lib/librte_cmdline.a 00:18:35.963 [154/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:18:35.963 [155/724] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:18:36.220 [156/724] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:18:36.220 [157/724] 
Linking static target lib/librte_metrics.a 00:18:36.220 [158/724] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:18:36.220 [159/724] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:18:36.478 [160/724] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:18:36.737 [161/724] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:18:36.995 [162/724] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:18:37.563 [163/724] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:18:37.563 [164/724] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:18:37.563 [165/724] Linking static target lib/librte_timer.a 00:18:37.563 [166/724] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:18:37.821 [167/724] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:18:38.079 [168/724] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:18:38.079 [169/724] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:18:38.336 [170/724] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:18:38.903 [171/724] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:18:38.903 [172/724] Linking static target lib/librte_bitratestats.a 00:18:38.903 [173/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:18:38.903 [174/724] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:18:38.903 [175/724] Linking static target lib/librte_hash.a 00:18:38.903 [176/724] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:18:38.903 [177/724] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:18:38.903 [178/724] Linking static target lib/librte_bbdev.a 00:18:39.162 [179/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:18:39.162 [180/724] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 
00:18:39.162 [181/724] Linking static target lib/acl/libavx2_tmp.a 00:18:39.162 [182/724] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:18:39.162 [183/724] Linking static target lib/librte_ethdev.a 00:18:39.476 [184/724] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.476 [185/724] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:18:39.476 [186/724] Linking static target lib/acl/libavx512_tmp.a 00:18:39.476 [187/724] Linking target lib/librte_eal.so.25.0 00:18:39.476 [188/724] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.476 [189/724] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:18:39.735 [190/724] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.735 [191/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:18:39.735 [192/724] Linking target lib/librte_ring.so.25.0 00:18:39.735 [193/724] Linking target lib/librte_meter.so.25.0 00:18:39.735 [194/724] Linking target lib/librte_pci.so.25.0 00:18:39.735 [195/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:18:39.735 [196/724] Linking target lib/librte_timer.so.25.0 00:18:39.735 [197/724] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:18:39.735 [198/724] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:18:39.735 [199/724] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:18:39.735 [200/724] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:18:39.735 [201/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:18:39.735 [202/724] Linking target lib/librte_rcu.so.25.0 00:18:39.735 [203/724] Linking target lib/librte_mempool.so.25.0 00:18:39.735 [204/724] Linking static target lib/librte_acl.a 00:18:39.735 [205/724] Generating symbol file 
lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:18:39.993 [206/724] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:18:39.993 [207/724] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:18:39.993 [208/724] Linking target lib/librte_mbuf.so.25.0 00:18:39.993 [209/724] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:18:39.993 [210/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:18:39.993 [211/724] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:18:40.253 [212/724] Linking target lib/librte_net.so.25.0 00:18:40.253 [213/724] Linking target lib/librte_bbdev.so.25.0 00:18:40.253 [214/724] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:18:40.253 [215/724] Linking static target lib/librte_cfgfile.a 00:18:40.253 [216/724] Linking target lib/librte_acl.so.25.0 00:18:40.253 [217/724] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:18:40.253 [218/724] Linking target lib/librte_cmdline.so.25.0 00:18:40.253 [219/724] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:18:40.253 [220/724] Linking target lib/librte_hash.so.25.0 00:18:40.512 [221/724] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:18:40.512 [222/724] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:18:40.512 [223/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:18:40.512 [224/724] Linking target lib/librte_cfgfile.so.25.0 00:18:40.512 [225/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:18:40.770 [226/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:18:40.770 [227/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:18:40.770 [228/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 
00:18:41.029 [229/724] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:18:41.029 [230/724] Linking static target lib/librte_bpf.a 00:18:41.029 [231/724] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:18:41.029 [232/724] Linking static target lib/librte_compressdev.a 00:18:41.287 [233/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:18:41.287 [234/724] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:41.545 [235/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:18:41.545 [236/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:18:41.545 [237/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:18:41.832 [238/724] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:41.832 [239/724] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:18:41.832 [240/724] Linking static target lib/librte_distributor.a 00:18:41.832 [241/724] Linking target lib/librte_compressdev.so.25.0 00:18:41.832 [242/724] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:18:42.090 [243/724] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.090 [244/724] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:18:42.090 [245/724] Linking target lib/librte_distributor.so.25.0 00:18:42.090 [246/724] Linking static target lib/librte_dmadev.a 00:18:42.348 [247/724] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:18:42.607 [248/724] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:18:42.865 [249/724] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.865 [250/724] Linking target lib/librte_dmadev.so.25.0 
00:18:42.865 [251/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:18:43.124 [252/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:18:43.124 [253/724] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:18:43.124 [254/724] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:18:43.124 [255/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:18:43.124 [256/724] Linking static target lib/librte_efd.a 00:18:43.382 [257/724] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:18:43.382 [258/724] Linking static target lib/librte_cryptodev.a 00:18:43.640 [259/724] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:18:43.640 [260/724] Linking target lib/librte_efd.so.25.0 00:18:43.898 [261/724] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:18:43.898 [262/724] Linking static target lib/librte_dispatcher.a 00:18:43.898 [263/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:18:44.154 [264/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:18:44.154 [265/724] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:18:44.154 [266/724] Linking static target lib/librte_gpudev.a 00:18:44.412 [267/724] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:18:44.412 [268/724] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:18:44.670 [269/724] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:18:44.670 [270/724] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:18:44.928 [271/724] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:18:44.928 [272/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:18:45.186 [273/724] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:18:45.186 [274/724] Linking target lib/librte_cryptodev.so.25.0 00:18:45.452 [275/724] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:18:45.452 [276/724] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:18:45.452 [277/724] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:18:45.452 [278/724] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:18:45.452 [279/724] Linking static target lib/librte_eventdev.a 00:18:45.452 [280/724] Linking static target lib/librte_gro.a 00:18:45.452 [281/724] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:45.452 [282/724] Linking target lib/librte_gpudev.so.25.0 00:18:45.710 [283/724] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:18:45.710 [284/724] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:18:45.710 [285/724] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:18:45.710 [286/724] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:18:45.710 [287/724] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:45.710 [288/724] Linking target lib/librte_ethdev.so.25.0 00:18:45.967 [289/724] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:18:45.967 [290/724] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:18:45.967 [291/724] Linking target lib/librte_metrics.so.25.0 00:18:45.967 [292/724] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:18:46.226 [293/724] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:18:46.226 [294/724] Linking target lib/librte_bpf.so.25.0 00:18:46.226 [295/724] Linking target lib/librte_bitratestats.so.25.0 00:18:46.226 [296/724] Linking target lib/librte_gro.so.25.0 00:18:46.226 [297/724] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:18:46.226 [298/724] Linking static target lib/librte_gso.a
00:18:46.226 [299/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:18:46.483 [300/724] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:18:46.483 [301/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:18:46.483 [302/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:18:46.483 [303/724] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:18:46.742 [304/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:18:46.742 [305/724] Linking target lib/librte_gso.so.25.0
00:18:46.742 [306/724] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:18:47.001 [307/724] Linking static target lib/librte_jobstats.a
00:18:47.001 [308/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:18:47.001 [309/724] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:18:47.259 [310/724] Linking static target lib/librte_ip_frag.a
00:18:47.259 [311/724] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:18:47.259 [312/724] Linking target lib/librte_jobstats.so.25.0
00:18:47.259 [313/724] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:18:47.259 [314/724] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:18:47.517 [315/724] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:18:47.517 [316/724] Linking static target lib/member/libsketch_avx512_tmp.a
00:18:47.517 [317/724] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:18:47.517 [318/724] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:18:47.517 [319/724] Linking target lib/librte_ip_frag.so.25.0
00:18:47.517 [320/724] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:18:47.517 [321/724] Linking static target lib/librte_latencystats.a
00:18:47.517 [322/724] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:18:47.776 [323/724] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:18:47.776 [324/724] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:18:47.776 [325/724] Linking static target lib/librte_lpm.a
00:18:47.776 [326/724] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:18:47.776 [327/724] Linking target lib/librte_latencystats.so.25.0
00:18:47.776 [328/724] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:18:47.776 [329/724] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:18:48.036 [330/724] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:18:48.036 [331/724] Linking target lib/librte_eventdev.so.25.0
00:18:48.036 [332/724] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:18:48.036 [333/724] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols
00:18:48.036 [334/724] Linking target lib/librte_lpm.so.25.0
00:18:48.036 [335/724] Linking target lib/librte_dispatcher.so.25.0
00:18:48.293 [336/724] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:18:48.294 [337/724] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols
00:18:48.294 [338/724] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:18:48.294 [339/724] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:18:48.294 [340/724] Linking static target lib/librte_pcapng.a
00:18:48.552 [341/724] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:18:48.552 [342/724] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:18:48.552 [343/724] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:18:48.552 [344/724] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:18:48.552 [345/724] Linking target lib/librte_pcapng.so.25.0
00:18:48.811 [346/724] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:18:48.811 [347/724] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols
00:18:48.811 [348/724] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:18:48.811 [349/724] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:18:49.068 [350/724] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:18:49.068 [351/724] Linking static target lib/librte_power.a
00:18:49.068 [352/724] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:18:49.068 [353/724] Linking static target lib/librte_rawdev.a
00:18:49.068 [354/724] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:18:49.328 [355/724] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:18:49.328 [356/724] Linking static target lib/librte_regexdev.a
00:18:49.328 [357/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:18:49.328 [358/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:18:49.585 [359/724] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:18:49.585 [360/724] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:18:49.585 [361/724] Linking static target lib/librte_member.a
00:18:49.585 [362/724] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:18:49.585 [363/724] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:18:49.843 [364/724] Linking static target lib/librte_mldev.a
00:18:49.843 [365/724] Linking target lib/librte_rawdev.so.25.0
00:18:49.843 [366/724] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:18:49.843 [367/724] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:18:49.843 [368/724] Linking target lib/librte_power.so.25.0
00:18:49.843 [369/724] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:18:49.843 [370/724] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:18:50.101 [371/724] Linking target lib/librte_member.so.25.0
00:18:50.101 [372/724] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:18:50.101 [373/724] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:18:50.101 [374/724] Linking target lib/librte_regexdev.so.25.0
00:18:50.101 [375/724] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:18:50.101 [376/724] Linking static target lib/librte_rib.a
00:18:50.358 [377/724] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:18:50.358 [378/724] Linking static target lib/librte_reorder.a
00:18:50.358 [379/724] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:18:50.616 [380/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:18:50.616 [381/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:18:50.616 [382/724] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:18:50.616 [383/724] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:18:50.616 [384/724] Linking static target lib/librte_stack.a
00:18:50.616 [385/724] Linking target lib/librte_reorder.so.25.0
00:18:50.616 [386/724] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:18:50.875 [387/724] Linking target lib/librte_rib.so.25.0
00:18:50.875 [388/724] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:18:50.875 [389/724] Linking static target lib/librte_security.a
00:18:50.875 [390/724] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols
00:18:50.875 [391/724] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:18:50.875 [392/724] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols
00:18:50.875 [393/724] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:18:50.875 [394/724] Linking target lib/librte_stack.so.25.0
00:18:51.138 [395/724] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:18:51.409 [396/724] Linking target lib/librte_security.so.25.0
00:18:51.409 [397/724] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:18:51.409 [398/724] Linking target lib/librte_mldev.so.25.0
00:18:51.409 [399/724] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols
00:18:51.409 [400/724] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:18:51.667 [401/724] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:18:51.667 [402/724] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:18:51.667 [403/724] Linking static target lib/librte_sched.a
00:18:51.667 [404/724] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:18:52.234 [405/724] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:18:52.234 [406/724] Linking target lib/librte_sched.so.25.0
00:18:52.234 [407/724] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:18:52.234 [408/724] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols
00:18:52.492 [409/724] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:18:52.751 [410/724] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:18:52.751 [411/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:18:53.009 [412/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:18:53.009 [413/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:18:53.267 [414/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:18:53.267 [415/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:18:53.526 [416/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:18:53.526 [417/724] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:18:53.526 [418/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:18:53.784 [419/724] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:18:53.784 [420/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:18:54.042 [421/724] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:18:54.042 [422/724] Linking static target lib/librte_ipsec.a
00:18:54.299 [423/724] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:18:54.299 [424/724] Linking static target lib/fib/libtrie_avx512_tmp.a
00:18:54.299 [425/724] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:18:54.299 [426/724] Compiling C object lib/librte_port.a.p/port_port_log.c.o
00:18:54.299 [427/724] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:18:54.299 [428/724] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:18:54.299 [429/724] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:18:54.299 [430/724] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:18:54.299 [431/724] Linking target lib/librte_ipsec.so.25.0
00:18:54.557 [432/724] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:18:54.557 [433/724] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols
00:18:55.124 [434/724] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:18:55.381 [435/724] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:18:55.381 [436/724] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:18:55.381 [437/724] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:18:55.381 [438/724] Linking static target lib/librte_pdcp.a
00:18:55.381 [439/724] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:18:55.381 [440/724] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:18:55.644 [441/724] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:18:55.903 [442/724] Linking target lib/librte_pdcp.so.25.0
00:18:55.903 [443/724] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:18:56.160 [444/724] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:18:56.160 [445/724] Linking static target lib/librte_fib.a
00:18:56.418 [446/724] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:18:56.418 [447/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:18:56.418 [448/724] Linking target lib/librte_fib.so.25.0
00:18:56.418 [449/724] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:18:56.418 [450/724] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:18:56.678 [451/724] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:18:56.678 [452/724] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:18:56.678 [453/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:18:56.678 [454/724] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:18:57.245 [455/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:18:57.245 [456/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:18:57.245 [457/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:18:57.503 [458/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:18:57.503 [459/724] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:18:57.503 [460/724] Linking static target lib/librte_port.a
00:18:57.503 [461/724] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:18:57.503 [462/724] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:18:57.760 [463/724] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:18:57.760 [464/724] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:18:57.760 [465/724] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:18:57.760 [466/724] Linking static target lib/librte_pdump.a
00:18:58.018 [467/724] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:18:58.018 [468/724] Linking target lib/librte_port.so.25.0
00:18:58.018 [469/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:18:58.018 [470/724] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:18:58.276 [471/724] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols
00:18:58.276 [472/724] Linking target lib/librte_pdump.so.25.0
00:18:58.276 [473/724] Compiling C object lib/librte_table.a.p/table_table_log.c.o
00:18:58.534 [474/724] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:18:58.534 [475/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:18:58.534 [476/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:18:58.792 [477/724] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:18:58.792 [478/724] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:18:58.792 [479/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:18:59.049 [480/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:18:59.306 [481/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:18:59.306 [482/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:18:59.868 [483/724] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:18:59.868 [484/724] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:18:59.868 [485/724] Linking static target lib/librte_table.a
00:19:00.124 [486/724] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:19:00.382 [487/724] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:19:00.382 [488/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:19:00.641 [489/724] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:19:00.641 [490/724] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:19:00.641 [491/724] Linking target lib/librte_table.so.25.0
00:19:00.898 [492/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:19:00.898 [493/724] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:19:00.898 [494/724] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols
00:19:00.898 [495/724] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:19:01.156 [496/724] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:19:01.156 [497/724] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:19:01.417 [498/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:19:01.682 [499/724] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:19:01.682 [500/724] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:19:01.682 [501/724] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:19:01.941 [502/724] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:19:01.941 [503/724] Linking static target lib/librte_graph.a
00:19:01.941 [504/724] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:19:02.507 [505/724] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:19:02.507 [506/724] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:19:02.507 [507/724] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:19:02.765 [508/724] Linking target lib/librte_graph.so.25.0
00:19:02.765 [509/724] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols
00:19:02.765 [510/724] Compiling C object lib/librte_node.a.p/node_null.c.o
00:19:03.024 [511/724] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:19:03.024 [512/724] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:19:03.024 [513/724] Compiling C object lib/librte_node.a.p/node_log.c.o
00:19:03.282 [514/724] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:19:03.282 [515/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:19:03.282 [516/724] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:19:03.282 [517/724] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:19:03.685 [518/724] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:19:03.981 [519/724] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:19:03.981 [520/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:19:03.981 [521/724] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:19:03.981 [522/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:19:03.981 [523/724] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:19:04.237 [524/724] Linking static target lib/librte_node.a
00:19:04.237 [525/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:19:04.495 [526/724] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:19:04.495 [527/724] Linking target lib/librte_node.so.25.0
00:19:04.495 [528/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:19:04.495 [529/724] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:19:04.495 [530/724] Linking static target drivers/libtmp_rte_bus_pci.a
00:19:04.753 [531/724] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:19:04.753 [532/724] Linking static target drivers/libtmp_rte_bus_vdev.a
00:19:04.753 [533/724] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:19:05.012 [534/724] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:19:05.012 [535/724] Linking static target drivers/librte_bus_pci.a
00:19:05.012 [536/724] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:19:05.012 [537/724] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:19:05.012 [538/724] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:19:05.012 [539/724] Linking static target drivers/librte_bus_vdev.a
00:19:05.270 [540/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:19:05.270 [541/724] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:19:05.270 [542/724] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:19:05.270 [543/724] Linking target drivers/librte_bus_vdev.so.25.0
00:19:05.528 [544/724] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols
00:19:05.528 [545/724] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:19:05.528 [546/724] Linking static target drivers/libtmp_rte_mempool_ring.a
00:19:05.528 [547/724] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:19:05.528 [548/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:19:05.528 [549/724] Linking target drivers/librte_bus_pci.so.25.0
00:19:05.528 [550/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:19:05.528 [551/724] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:19:05.786 [552/724] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:19:05.786 [553/724] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols
00:19:05.786 [554/724] Linking static target drivers/librte_mempool_ring.a
00:19:05.786 [555/724] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:19:05.786 [556/724] Linking target drivers/librte_mempool_ring.so.25.0
00:19:05.786 [557/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:19:06.353 [558/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:19:06.918 [559/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:19:07.177 [560/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:19:07.177 [561/724] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:19:07.177 [562/724] Linking static target drivers/net/i40e/base/libi40e_base.a
00:19:07.743 [563/724] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:19:07.743 [564/724] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:19:07.743 [565/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:19:08.001 [566/724] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:19:08.001 [567/724] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:19:08.260 [568/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:19:08.260 [569/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:19:08.518 [570/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:19:08.776 [571/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:19:08.776 [572/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:19:08.776 [573/724] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output)
00:19:08.776 [574/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:19:09.710 [575/724] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:19:09.710 [576/724] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:19:09.710 [577/724] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:19:09.969 [578/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:19:10.227 [579/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:19:10.227 [580/724] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:19:10.227 [581/724] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:19:10.227 [582/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:19:10.485 [583/724] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:19:10.485 [584/724] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:19:10.744 [585/724] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:19:10.744 [586/724] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:19:10.744 [587/724] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:19:11.002 [588/724] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:19:11.002 [589/724] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:19:11.259 [590/724] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:19:11.259 [591/724] Linking static target drivers/libtmp_rte_net_i40e.a
00:19:11.259 [592/724] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:19:11.259 [593/724] Linking static target lib/librte_vhost.a
00:19:11.259 [594/724] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:19:11.259 [595/724] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:19:11.259 [596/724] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:19:11.516 [597/724] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:19:11.516 [598/724] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:19:11.516 [599/724] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:19:11.516 [600/724] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:19:11.516 [601/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:19:11.516 [602/724] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:19:11.516 [603/724] Linking static target drivers/librte_net_i40e.a
00:19:11.774 [604/724] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:19:12.339 [605/724] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:19:12.339 [606/724] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:19:12.339 [607/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:19:12.339 [608/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:19:12.339 [609/724] Linking target drivers/librte_net_i40e.so.25.0
00:19:12.339 [610/724] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:19:12.596 [611/724] Linking target lib/librte_vhost.so.25.0
00:19:12.596 [612/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:19:12.853 [613/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:19:13.110 [614/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:19:13.111 [615/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:19:13.111 [616/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:19:13.368 [617/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:19:13.368 [618/724] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:19:13.368 [619/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:19:13.626 [620/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:19:13.884 [621/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:19:14.143 [622/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:19:14.143 [623/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:19:14.143 [624/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:19:14.143 [625/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:19:14.143 [626/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:19:14.402 [627/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:19:14.402 [628/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:19:14.402 [629/724] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:19:14.402 [630/724] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:19:14.968 [631/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:19:14.968 [632/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:19:15.228 [633/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:19:15.228 [634/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:19:15.228 [635/724] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:19:15.487 [636/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:19:15.746 [637/724] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:19:15.746 [638/724] Linking static target lib/librte_pipeline.a
00:19:16.311 [639/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:19:16.311 [640/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:19:16.311 [641/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:19:16.571 [642/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:19:16.571 [643/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:19:16.829 [644/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:19:16.829 [645/724] Linking target app/dpdk-dumpcap
00:19:16.829 [646/724] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:19:17.088 [647/724] Linking target app/dpdk-graph
00:19:17.088 [648/724] Linking target app/dpdk-pdump
00:19:17.088 [649/724] Linking target app/dpdk-proc-info
00:19:17.088 [650/724] Linking target app/dpdk-test-cmdline
00:19:17.367 [651/724] Linking target app/dpdk-test-acl
00:19:17.367 [652/724] Linking target app/dpdk-test-compress-perf
00:19:17.367 [653/724] Linking target app/dpdk-test-crypto-perf
00:19:17.367 [654/724] Linking target app/dpdk-test-dma-perf
00:19:17.625 [655/724] Linking target app/dpdk-test-fib
00:19:17.625 [656/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:19:17.883 [657/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:19:17.883 [658/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:19:17.883 [659/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:19:17.883 [660/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:19:18.142 [661/724] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:19:18.142 [662/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:19:18.400 [663/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:19:18.400 [664/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:19:18.659 [665/724] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:19:18.659 [666/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:19:18.659 [667/724] Linking target app/dpdk-test-gpudev
00:19:18.659 [668/724] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:19:18.916 [669/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:19:18.916 [670/724] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:19:18.916 [671/724] Linking target app/dpdk-test-eventdev
00:19:19.175 [672/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:19:19.175 [673/724] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:19:19.175 [674/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:19:19.175 [675/724] Linking target lib/librte_pipeline.so.25.0
00:19:19.175 [676/724] Linking target app/dpdk-test-bbdev
00:19:19.175 [677/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:19:19.434 [678/724] Linking target app/dpdk-test-flow-perf
00:19:19.434 [679/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:19:19.692 [680/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:19:19.692 [681/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:19:19.955 [682/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:19:19.955 [683/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:19:19.955 [684/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:19:19.955 [685/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:19:19.955 [686/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:19:20.213 [687/724] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:19:20.472 [688/724] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:19:20.472 [689/724] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:19:20.472 [690/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:19:20.731 [691/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:19:20.989 [692/724] Linking target app/dpdk-test-pipeline
00:19:20.989 [693/724] Linking target app/dpdk-test-mldev
00:19:20.989 [694/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:19:21.247 [695/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:19:21.247 [696/724] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:19:21.814 [697/724] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:19:21.814 [698/724] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:19:21.814 [699/724] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:19:21.814 [700/724] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:19:22.072 [701/724] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:19:22.072 [702/724] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:19:22.072 [703/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:19:22.330 [704/724] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:19:22.589 [705/724] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:19:22.589 [706/724] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:19:22.847 [707/724] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:19:23.107 [708/724] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:19:23.107 [709/724] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:19:23.674 [710/724] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:19:23.674 [711/724] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:19:23.674 [712/724] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:19:23.933 [713/724] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:19:23.933 [714/724] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:19:24.192 [715/724] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:19:24.192 [716/724] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:19:24.192 [717/724] Linking target app/dpdk-test-sad
00:19:24.192 [718/724] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:19:24.192 [719/724] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:19:24.451 [720/724] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:19:24.451 [721/724] Linking target app/dpdk-test-regex
00:19:24.451 [722/724] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:19:24.711 [723/724] Linking target app/dpdk-testpmd
00:19:24.969 [724/724] Linking target app/dpdk-test-security-perf
00:19:24.969 13:30:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:19:24.969 13:30:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:19:24.969 13:30:39 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:19:25.227 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:19:25.227 [0/1] Installing files.
00:19:25.488 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:19:25.488 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 
00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.488 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:19:25.489 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:19:25.489 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:19:25.750 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.751 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.752 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:19:25.753 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:19:25.753 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:19:25.753 Installing lib/librte_latencystats.a to
/home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.753 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:19:25.754 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.754 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.754 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.754 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:25.754 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing lib/librte_node.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:19:26.326 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:19:26.326 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:19:26.326 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.326 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:19:26.326 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 
00:19:26.326 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.326 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.326 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.327 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.328 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.329 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:19:26.330 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:19:26.330 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:19:26.330 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:19:26.330 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:19:26.330 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:19:26.330 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:19:26.330 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:19:26.330 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:19:26.330 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:19:26.330 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:19:26.330 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:19:26.330 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:19:26.330 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:19:26.330 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:19:26.330 Installing symlink pointing to 
librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:19:26.330 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:19:26.330 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:19:26.330 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:19:26.330 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:19:26.330 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:19:26.330 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:19:26.330 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:19:26.330 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:19:26.330 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:19:26.330 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:19:26.330 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:19:26.330 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:19:26.330 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:19:26.330 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:19:26.330 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:19:26.330 Installing symlink pointing to librte_metrics.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:19:26.330 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:19:26.330 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:19:26.330 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:19:26.330 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:19:26.330 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:19:26.330 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:19:26.330 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:19:26.330 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:19:26.330 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:19:26.330 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:19:26.330 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:19:26.330 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:19:26.330 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:19:26.330 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:19:26.330 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:19:26.330 Installing symlink pointing to 
librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:19:26.330 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:19:26.330 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:19:26.330 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:19:26.330 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:19:26.330 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:19:26.330 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:19:26.330 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:19:26.330 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:19:26.330 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:19:26.330 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:19:26.330 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:19:26.330 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:19:26.330 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:19:26.330 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:19:26.330 Installing symlink pointing to librte_gro.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:19:26.330 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:19:26.330 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:19:26.330 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:19:26.330 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:19:26.330 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:19:26.330 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:19:26.330 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:19:26.330 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:19:26.330 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:19:26.330 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:19:26.330 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:19:26.330 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:19:26.330 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:19:26.330 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:19:26.330 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:19:26.330 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:19:26.330 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:19:26.331 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 
00:19:26.331 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:19:26.331 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:19:26.331 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:19:26.331 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:19:26.331 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:19:26.331 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:19:26.331 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:19:26.331 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:19:26.331 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:19:26.331 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:19:26.331 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:19:26.331 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:19:26.331 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:19:26.331 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:19:26.331 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:19:26.331 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:19:26.331 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:19:26.331 Installing symlink pointing to librte_rib.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:19:26.331 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:19:26.331 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:19:26.331 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:19:26.331 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:19:26.331 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:19:26.331 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:19:26.331 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:19:26.331 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:19:26.331 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:19:26.331 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:19:26.331 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:19:26.331 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:19:26.331 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:19:26.331 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:19:26.331 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:19:26.331 Installing symlink pointing to librte_fib.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:19:26.331 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:19:26.331 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:19:26.331 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:19:26.331 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:19:26.331 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:19:26.331 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:19:26.331 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:19:26.331 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:19:26.331 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:19:26.331 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:19:26.331 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:19:26.331 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:19:26.331 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:19:26.331 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:19:26.331 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:19:26.331 Installing symlink pointing to 
librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:19:26.331 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:19:26.331 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:19:26.331 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:19:26.331 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:19:26.331 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:19:26.331 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:19:26.331 13:30:40 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:19:26.331 13:30:40 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:26.331 00:19:26.331 real 1m8.220s 00:19:26.331 user 8m21.671s 00:19:26.331 sys 1m19.880s 00:19:26.331 13:30:40 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:19:26.331 13:30:40 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:19:26.331 ************************************ 00:19:26.331 END TEST build_native_dpdk 00:19:26.331 ************************************ 00:19:26.331 13:30:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:19:26.331 13:30:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:19:26.331 13:30:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:19:26.331 13:30:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:19:26.331 13:30:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:19:26.331 13:30:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:19:26.331 13:30:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:19:26.331 13:30:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:19:26.590 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:19:26.590 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:19:26.590 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:19:26.590 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:27.157 Using 'verbs' RDMA provider 00:19:40.335 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:19:55.211 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:19:55.212 Creating mk/config.mk...done. 00:19:55.212 Creating mk/cc.flags.mk...done. 00:19:55.212 Type 'make' to build. 00:19:55.212 13:31:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:19:55.212 13:31:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:19:55.212 13:31:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:19:55.212 13:31:08 -- common/autotest_common.sh@10 -- $ set +x 00:19:55.212 ************************************ 00:19:55.212 START TEST make 00:19:55.212 ************************************ 00:19:55.212 13:31:08 make -- common/autotest_common.sh@1125 -- $ make -j10 00:19:55.212 make[1]: Nothing to be done for 'all'. 
00:20:51.591 CC lib/ut/ut.o 00:20:51.591 CC lib/ut_mock/mock.o 00:20:51.591 CC lib/log/log.o 00:20:51.591 CC lib/log/log_flags.o 00:20:51.591 CC lib/log/log_deprecated.o 00:20:51.591 LIB libspdk_ut.a 00:20:51.591 LIB libspdk_ut_mock.a 00:20:51.591 SO libspdk_ut.so.2.0 00:20:51.591 SO libspdk_ut_mock.so.6.0 00:20:51.591 LIB libspdk_log.a 00:20:51.591 SYMLINK libspdk_ut.so 00:20:51.591 SYMLINK libspdk_ut_mock.so 00:20:51.591 SO libspdk_log.so.7.1 00:20:51.591 SYMLINK libspdk_log.so 00:20:51.591 CC lib/dma/dma.o 00:20:51.591 CXX lib/trace_parser/trace.o 00:20:51.591 CC lib/ioat/ioat.o 00:20:51.591 CC lib/util/base64.o 00:20:51.591 CC lib/util/cpuset.o 00:20:51.591 CC lib/util/bit_array.o 00:20:51.591 CC lib/util/crc32.o 00:20:51.591 CC lib/util/crc16.o 00:20:51.591 CC lib/util/crc32c.o 00:20:51.591 CC lib/vfio_user/host/vfio_user_pci.o 00:20:51.591 CC lib/util/crc32_ieee.o 00:20:51.591 CC lib/util/crc64.o 00:20:51.591 CC lib/util/dif.o 00:20:51.591 LIB libspdk_dma.a 00:20:51.591 SO libspdk_dma.so.5.0 00:20:51.591 CC lib/util/fd.o 00:20:51.591 CC lib/util/fd_group.o 00:20:51.591 SYMLINK libspdk_dma.so 00:20:51.591 CC lib/util/file.o 00:20:51.591 CC lib/util/hexlify.o 00:20:51.591 CC lib/vfio_user/host/vfio_user.o 00:20:51.591 CC lib/util/iov.o 00:20:51.591 LIB libspdk_ioat.a 00:20:51.591 CC lib/util/math.o 00:20:51.591 SO libspdk_ioat.so.7.0 00:20:51.591 CC lib/util/net.o 00:20:51.591 SYMLINK libspdk_ioat.so 00:20:51.591 CC lib/util/pipe.o 00:20:51.591 CC lib/util/strerror_tls.o 00:20:51.591 CC lib/util/string.o 00:20:51.591 LIB libspdk_vfio_user.a 00:20:51.591 CC lib/util/uuid.o 00:20:51.591 SO libspdk_vfio_user.so.5.0 00:20:51.591 CC lib/util/xor.o 00:20:51.591 CC lib/util/zipf.o 00:20:51.591 CC lib/util/md5.o 00:20:51.591 SYMLINK libspdk_vfio_user.so 00:20:51.591 LIB libspdk_util.a 00:20:51.591 SO libspdk_util.so.10.0 00:20:51.591 LIB libspdk_trace_parser.a 00:20:51.592 SYMLINK libspdk_util.so 00:20:51.592 SO libspdk_trace_parser.so.6.0 00:20:51.592 SYMLINK 
libspdk_trace_parser.so 00:20:51.592 CC lib/conf/conf.o 00:20:51.592 CC lib/json/json_parse.o 00:20:51.592 CC lib/json/json_util.o 00:20:51.592 CC lib/json/json_write.o 00:20:51.592 CC lib/vmd/vmd.o 00:20:51.592 CC lib/vmd/led.o 00:20:51.592 CC lib/env_dpdk/env.o 00:20:51.592 CC lib/rdma_provider/common.o 00:20:51.592 CC lib/rdma_utils/rdma_utils.o 00:20:51.592 CC lib/idxd/idxd.o 00:20:51.592 CC lib/idxd/idxd_user.o 00:20:51.592 LIB libspdk_conf.a 00:20:51.592 CC lib/env_dpdk/memory.o 00:20:51.592 CC lib/rdma_provider/rdma_provider_verbs.o 00:20:51.592 SO libspdk_conf.so.6.0 00:20:51.592 CC lib/env_dpdk/pci.o 00:20:51.592 SYMLINK libspdk_conf.so 00:20:51.592 CC lib/env_dpdk/init.o 00:20:51.592 LIB libspdk_rdma_utils.a 00:20:51.592 LIB libspdk_json.a 00:20:51.592 SO libspdk_rdma_utils.so.1.0 00:20:51.592 SO libspdk_json.so.6.0 00:20:51.592 SYMLINK libspdk_rdma_utils.so 00:20:51.592 CC lib/env_dpdk/threads.o 00:20:51.592 CC lib/env_dpdk/pci_ioat.o 00:20:51.592 SYMLINK libspdk_json.so 00:20:51.592 CC lib/idxd/idxd_kernel.o 00:20:51.592 LIB libspdk_rdma_provider.a 00:20:51.592 SO libspdk_rdma_provider.so.6.0 00:20:51.592 CC lib/env_dpdk/pci_virtio.o 00:20:51.592 CC lib/env_dpdk/pci_vmd.o 00:20:51.592 CC lib/env_dpdk/pci_idxd.o 00:20:51.592 SYMLINK libspdk_rdma_provider.so 00:20:51.592 CC lib/env_dpdk/pci_event.o 00:20:51.592 CC lib/env_dpdk/sigbus_handler.o 00:20:51.592 CC lib/env_dpdk/pci_dpdk.o 00:20:51.592 CC lib/env_dpdk/pci_dpdk_2207.o 00:20:51.592 CC lib/env_dpdk/pci_dpdk_2211.o 00:20:51.592 CC lib/jsonrpc/jsonrpc_server.o 00:20:51.592 LIB libspdk_idxd.a 00:20:51.592 SO libspdk_idxd.so.12.1 00:20:51.592 LIB libspdk_vmd.a 00:20:51.592 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:20:51.592 CC lib/jsonrpc/jsonrpc_client.o 00:20:51.592 SO libspdk_vmd.so.6.0 00:20:51.592 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:20:51.592 SYMLINK libspdk_idxd.so 00:20:51.592 SYMLINK libspdk_vmd.so 00:20:51.592 LIB libspdk_jsonrpc.a 00:20:51.592 SO libspdk_jsonrpc.so.6.0 00:20:51.592 SYMLINK 
libspdk_jsonrpc.so 00:20:51.850 CC lib/rpc/rpc.o 00:20:52.108 LIB libspdk_env_dpdk.a 00:20:52.108 LIB libspdk_rpc.a 00:20:52.108 SO libspdk_rpc.so.6.0 00:20:52.108 SO libspdk_env_dpdk.so.15.1 00:20:52.108 SYMLINK libspdk_rpc.so 00:20:52.366 SYMLINK libspdk_env_dpdk.so 00:20:52.366 CC lib/trace/trace.o 00:20:52.366 CC lib/trace/trace_flags.o 00:20:52.366 CC lib/trace/trace_rpc.o 00:20:52.366 CC lib/notify/notify_rpc.o 00:20:52.366 CC lib/notify/notify.o 00:20:52.366 CC lib/keyring/keyring.o 00:20:52.366 CC lib/keyring/keyring_rpc.o 00:20:52.624 LIB libspdk_notify.a 00:20:52.624 SO libspdk_notify.so.6.0 00:20:52.624 SYMLINK libspdk_notify.so 00:20:52.624 LIB libspdk_keyring.a 00:20:52.624 LIB libspdk_trace.a 00:20:52.624 SO libspdk_keyring.so.2.0 00:20:52.882 SO libspdk_trace.so.11.0 00:20:52.882 SYMLINK libspdk_keyring.so 00:20:52.882 SYMLINK libspdk_trace.so 00:20:53.140 CC lib/thread/thread.o 00:20:53.140 CC lib/thread/iobuf.o 00:20:53.140 CC lib/sock/sock.o 00:20:53.140 CC lib/sock/sock_rpc.o 00:20:53.707 LIB libspdk_sock.a 00:20:53.707 SO libspdk_sock.so.10.0 00:20:53.707 SYMLINK libspdk_sock.so 00:20:54.273 CC lib/nvme/nvme_ctrlr_cmd.o 00:20:54.273 CC lib/nvme/nvme_ctrlr.o 00:20:54.273 CC lib/nvme/nvme_fabric.o 00:20:54.273 CC lib/nvme/nvme_pcie_common.o 00:20:54.273 CC lib/nvme/nvme_ns_cmd.o 00:20:54.273 CC lib/nvme/nvme_ns.o 00:20:54.273 CC lib/nvme/nvme_pcie.o 00:20:54.273 CC lib/nvme/nvme.o 00:20:54.273 CC lib/nvme/nvme_qpair.o 00:20:55.209 CC lib/nvme/nvme_quirks.o 00:20:55.209 CC lib/nvme/nvme_transport.o 00:20:55.209 CC lib/nvme/nvme_discovery.o 00:20:55.209 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:20:55.209 LIB libspdk_thread.a 00:20:55.209 SO libspdk_thread.so.11.0 00:20:55.209 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:20:55.209 CC lib/nvme/nvme_tcp.o 00:20:55.209 CC lib/nvme/nvme_opal.o 00:20:55.209 SYMLINK libspdk_thread.so 00:20:55.468 CC lib/nvme/nvme_io_msg.o 00:20:55.468 CC lib/nvme/nvme_poll_group.o 00:20:55.725 CC lib/nvme/nvme_zns.o 00:20:55.983 CC 
lib/nvme/nvme_stubs.o 00:20:55.983 CC lib/accel/accel.o 00:20:55.983 CC lib/nvme/nvme_auth.o 00:20:55.983 CC lib/blob/blobstore.o 00:20:55.983 CC lib/blob/request.o 00:20:55.983 CC lib/blob/zeroes.o 00:20:56.242 CC lib/nvme/nvme_cuse.o 00:20:56.242 CC lib/accel/accel_rpc.o 00:20:56.500 CC lib/accel/accel_sw.o 00:20:56.500 CC lib/nvme/nvme_rdma.o 00:20:56.500 CC lib/blob/blob_bs_dev.o 00:20:56.757 CC lib/init/json_config.o 00:20:57.014 CC lib/virtio/virtio.o 00:20:57.014 CC lib/fsdev/fsdev.o 00:20:57.014 CC lib/init/subsystem.o 00:20:57.014 CC lib/fsdev/fsdev_io.o 00:20:57.272 CC lib/fsdev/fsdev_rpc.o 00:20:57.272 CC lib/virtio/virtio_vhost_user.o 00:20:57.272 CC lib/init/subsystem_rpc.o 00:20:57.272 CC lib/virtio/virtio_vfio_user.o 00:20:57.272 CC lib/init/rpc.o 00:20:57.272 CC lib/virtio/virtio_pci.o 00:20:57.530 LIB libspdk_accel.a 00:20:57.530 SO libspdk_accel.so.16.0 00:20:57.530 SYMLINK libspdk_accel.so 00:20:57.530 LIB libspdk_init.a 00:20:57.530 SO libspdk_init.so.6.0 00:20:57.788 SYMLINK libspdk_init.so 00:20:57.788 LIB libspdk_virtio.a 00:20:57.788 SO libspdk_virtio.so.7.0 00:20:57.788 LIB libspdk_fsdev.a 00:20:57.788 CC lib/bdev/bdev.o 00:20:57.788 CC lib/bdev/bdev_rpc.o 00:20:57.788 CC lib/bdev/bdev_zone.o 00:20:57.788 CC lib/bdev/part.o 00:20:57.788 CC lib/bdev/scsi_nvme.o 00:20:57.788 SO libspdk_fsdev.so.2.0 00:20:57.788 SYMLINK libspdk_virtio.so 00:20:57.788 CC lib/event/app.o 00:20:57.788 CC lib/event/reactor.o 00:20:58.046 SYMLINK libspdk_fsdev.so 00:20:58.046 CC lib/event/log_rpc.o 00:20:58.046 CC lib/event/app_rpc.o 00:20:58.046 CC lib/event/scheduler_static.o 00:20:58.305 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:20:58.305 LIB libspdk_nvme.a 00:20:58.564 SO libspdk_nvme.so.14.1 00:20:58.564 LIB libspdk_event.a 00:20:58.564 SO libspdk_event.so.14.0 00:20:58.823 SYMLINK libspdk_event.so 00:20:58.823 SYMLINK libspdk_nvme.so 00:20:59.082 LIB libspdk_fuse_dispatcher.a 00:20:59.082 SO libspdk_fuse_dispatcher.so.1.0 00:20:59.082 SYMLINK 
libspdk_fuse_dispatcher.so 00:21:00.459 LIB libspdk_blob.a 00:21:00.459 SO libspdk_blob.so.11.0 00:21:00.717 SYMLINK libspdk_blob.so 00:21:00.976 CC lib/lvol/lvol.o 00:21:00.976 CC lib/blobfs/blobfs.o 00:21:00.976 CC lib/blobfs/tree.o 00:21:01.234 LIB libspdk_bdev.a 00:21:01.493 SO libspdk_bdev.so.17.0 00:21:01.493 SYMLINK libspdk_bdev.so 00:21:01.752 CC lib/nvmf/ctrlr.o 00:21:01.752 CC lib/nvmf/ctrlr_discovery.o 00:21:01.752 CC lib/nvmf/ctrlr_bdev.o 00:21:01.752 CC lib/nvmf/subsystem.o 00:21:01.752 CC lib/ftl/ftl_core.o 00:21:01.752 CC lib/nbd/nbd.o 00:21:01.752 CC lib/scsi/dev.o 00:21:01.752 CC lib/ublk/ublk.o 00:21:02.009 LIB libspdk_blobfs.a 00:21:02.009 SO libspdk_blobfs.so.10.0 00:21:02.009 CC lib/scsi/lun.o 00:21:02.267 LIB libspdk_lvol.a 00:21:02.267 SYMLINK libspdk_blobfs.so 00:21:02.267 CC lib/scsi/port.o 00:21:02.267 SO libspdk_lvol.so.10.0 00:21:02.267 SYMLINK libspdk_lvol.so 00:21:02.267 CC lib/scsi/scsi.o 00:21:02.267 CC lib/ftl/ftl_init.o 00:21:02.267 CC lib/ublk/ublk_rpc.o 00:21:02.267 CC lib/nbd/nbd_rpc.o 00:21:02.526 CC lib/nvmf/nvmf.o 00:21:02.526 CC lib/scsi/scsi_bdev.o 00:21:02.526 CC lib/ftl/ftl_layout.o 00:21:02.526 CC lib/nvmf/nvmf_rpc.o 00:21:02.526 LIB libspdk_nbd.a 00:21:02.526 CC lib/nvmf/transport.o 00:21:02.526 SO libspdk_nbd.so.7.0 00:21:02.526 LIB libspdk_ublk.a 00:21:02.785 SO libspdk_ublk.so.3.0 00:21:02.785 SYMLINK libspdk_nbd.so 00:21:02.785 CC lib/nvmf/tcp.o 00:21:02.785 CC lib/nvmf/stubs.o 00:21:02.785 SYMLINK libspdk_ublk.so 00:21:02.785 CC lib/nvmf/mdns_server.o 00:21:02.785 CC lib/ftl/ftl_debug.o 00:21:03.043 CC lib/scsi/scsi_pr.o 00:21:03.301 CC lib/ftl/ftl_io.o 00:21:03.301 CC lib/nvmf/rdma.o 00:21:03.301 CC lib/nvmf/auth.o 00:21:03.301 CC lib/scsi/scsi_rpc.o 00:21:03.560 CC lib/scsi/task.o 00:21:03.560 CC lib/ftl/ftl_sb.o 00:21:03.560 CC lib/ftl/ftl_l2p.o 00:21:03.560 CC lib/ftl/ftl_l2p_flat.o 00:21:03.560 CC lib/ftl/ftl_nv_cache.o 00:21:03.560 CC lib/ftl/ftl_band.o 00:21:03.818 LIB libspdk_scsi.a 00:21:03.818 CC 
lib/ftl/ftl_band_ops.o 00:21:03.818 CC lib/ftl/ftl_writer.o 00:21:03.818 SO libspdk_scsi.so.9.0 00:21:03.818 CC lib/ftl/ftl_rq.o 00:21:04.075 SYMLINK libspdk_scsi.so 00:21:04.075 CC lib/ftl/ftl_reloc.o 00:21:04.075 CC lib/ftl/ftl_l2p_cache.o 00:21:04.075 CC lib/ftl/ftl_p2l.o 00:21:04.075 CC lib/ftl/ftl_p2l_log.o 00:21:04.333 CC lib/ftl/mngt/ftl_mngt.o 00:21:04.333 CC lib/iscsi/conn.o 00:21:04.333 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:21:04.333 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:21:04.591 CC lib/iscsi/init_grp.o 00:21:04.591 CC lib/iscsi/iscsi.o 00:21:04.591 CC lib/ftl/mngt/ftl_mngt_startup.o 00:21:04.591 CC lib/ftl/mngt/ftl_mngt_md.o 00:21:04.849 CC lib/vhost/vhost.o 00:21:04.849 CC lib/iscsi/param.o 00:21:04.849 CC lib/ftl/mngt/ftl_mngt_misc.o 00:21:04.849 CC lib/iscsi/portal_grp.o 00:21:04.849 CC lib/iscsi/tgt_node.o 00:21:04.849 CC lib/iscsi/iscsi_subsystem.o 00:21:05.107 CC lib/iscsi/iscsi_rpc.o 00:21:05.107 CC lib/iscsi/task.o 00:21:05.107 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:21:05.107 CC lib/vhost/vhost_rpc.o 00:21:05.107 CC lib/vhost/vhost_scsi.o 00:21:05.365 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:21:05.365 CC lib/ftl/mngt/ftl_mngt_band.o 00:21:05.365 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:21:05.623 CC lib/vhost/vhost_blk.o 00:21:05.623 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:21:05.623 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:21:05.623 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:21:05.623 CC lib/ftl/utils/ftl_conf.o 00:21:05.881 CC lib/ftl/utils/ftl_md.o 00:21:05.881 CC lib/vhost/rte_vhost_user.o 00:21:05.881 CC lib/ftl/utils/ftl_mempool.o 00:21:05.881 CC lib/ftl/utils/ftl_bitmap.o 00:21:05.881 CC lib/ftl/utils/ftl_property.o 00:21:05.881 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:21:06.139 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:21:06.139 LIB libspdk_nvmf.a 00:21:06.139 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:21:06.139 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:21:06.397 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:21:06.397 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:21:06.397 
CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:21:06.397 SO libspdk_nvmf.so.20.0 00:21:06.397 CC lib/ftl/upgrade/ftl_sb_v3.o 00:21:06.397 CC lib/ftl/upgrade/ftl_sb_v5.o 00:21:06.397 CC lib/ftl/nvc/ftl_nvc_dev.o 00:21:06.397 LIB libspdk_iscsi.a 00:21:06.654 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:21:06.655 SO libspdk_iscsi.so.8.0 00:21:06.655 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:21:06.655 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:21:06.655 SYMLINK libspdk_nvmf.so 00:21:06.655 CC lib/ftl/base/ftl_base_dev.o 00:21:06.655 CC lib/ftl/base/ftl_base_bdev.o 00:21:06.655 CC lib/ftl/ftl_trace.o 00:21:06.655 SYMLINK libspdk_iscsi.so 00:21:06.913 LIB libspdk_ftl.a 00:21:07.172 LIB libspdk_vhost.a 00:21:07.172 SO libspdk_vhost.so.8.0 00:21:07.172 SO libspdk_ftl.so.9.0 00:21:07.172 SYMLINK libspdk_vhost.so 00:21:07.489 SYMLINK libspdk_ftl.so 00:21:08.055 CC module/env_dpdk/env_dpdk_rpc.o 00:21:08.055 CC module/scheduler/dynamic/scheduler_dynamic.o 00:21:08.055 CC module/accel/error/accel_error.o 00:21:08.055 CC module/accel/ioat/accel_ioat.o 00:21:08.055 CC module/accel/dsa/accel_dsa.o 00:21:08.055 CC module/fsdev/aio/fsdev_aio.o 00:21:08.055 CC module/keyring/file/keyring.o 00:21:08.055 CC module/sock/posix/posix.o 00:21:08.055 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:21:08.055 CC module/blob/bdev/blob_bdev.o 00:21:08.055 LIB libspdk_env_dpdk_rpc.a 00:21:08.055 SO libspdk_env_dpdk_rpc.so.6.0 00:21:08.055 SYMLINK libspdk_env_dpdk_rpc.so 00:21:08.055 CC module/accel/error/accel_error_rpc.o 00:21:08.312 CC module/keyring/file/keyring_rpc.o 00:21:08.312 LIB libspdk_scheduler_dpdk_governor.a 00:21:08.312 SO libspdk_scheduler_dpdk_governor.so.4.0 00:21:08.312 LIB libspdk_scheduler_dynamic.a 00:21:08.312 CC module/accel/ioat/accel_ioat_rpc.o 00:21:08.312 SO libspdk_scheduler_dynamic.so.4.0 00:21:08.312 SYMLINK libspdk_scheduler_dpdk_governor.so 00:21:08.312 LIB libspdk_accel_error.a 00:21:08.312 SYMLINK libspdk_scheduler_dynamic.so 00:21:08.312 CC module/fsdev/aio/fsdev_aio_rpc.o 
00:21:08.312 LIB libspdk_keyring_file.a 00:21:08.312 LIB libspdk_blob_bdev.a 00:21:08.312 SO libspdk_accel_error.so.2.0 00:21:08.312 CC module/accel/dsa/accel_dsa_rpc.o 00:21:08.312 SO libspdk_keyring_file.so.2.0 00:21:08.312 SO libspdk_blob_bdev.so.11.0 00:21:08.312 CC module/accel/iaa/accel_iaa.o 00:21:08.312 LIB libspdk_accel_ioat.a 00:21:08.571 SYMLINK libspdk_accel_error.so 00:21:08.571 CC module/accel/iaa/accel_iaa_rpc.o 00:21:08.571 SO libspdk_accel_ioat.so.6.0 00:21:08.571 SYMLINK libspdk_blob_bdev.so 00:21:08.571 SYMLINK libspdk_keyring_file.so 00:21:08.571 CC module/fsdev/aio/linux_aio_mgr.o 00:21:08.571 CC module/scheduler/gscheduler/gscheduler.o 00:21:08.571 SYMLINK libspdk_accel_ioat.so 00:21:08.571 LIB libspdk_accel_dsa.a 00:21:08.571 SO libspdk_accel_dsa.so.5.0 00:21:08.571 LIB libspdk_accel_iaa.a 00:21:08.828 SYMLINK libspdk_accel_dsa.so 00:21:08.828 CC module/keyring/linux/keyring.o 00:21:08.828 LIB libspdk_scheduler_gscheduler.a 00:21:08.828 SO libspdk_accel_iaa.so.3.0 00:21:08.828 SO libspdk_scheduler_gscheduler.so.4.0 00:21:08.828 SYMLINK libspdk_accel_iaa.so 00:21:08.828 SYMLINK libspdk_scheduler_gscheduler.so 00:21:08.828 CC module/keyring/linux/keyring_rpc.o 00:21:08.829 CC module/bdev/delay/vbdev_delay.o 00:21:08.829 CC module/blobfs/bdev/blobfs_bdev.o 00:21:08.829 CC module/bdev/error/vbdev_error.o 00:21:08.829 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:21:08.829 CC module/bdev/gpt/gpt.o 00:21:08.829 LIB libspdk_fsdev_aio.a 00:21:08.829 SO libspdk_fsdev_aio.so.1.0 00:21:09.086 CC module/bdev/lvol/vbdev_lvol.o 00:21:09.086 LIB libspdk_keyring_linux.a 00:21:09.086 LIB libspdk_sock_posix.a 00:21:09.086 CC module/bdev/malloc/bdev_malloc.o 00:21:09.086 SO libspdk_keyring_linux.so.1.0 00:21:09.086 SO libspdk_sock_posix.so.6.0 00:21:09.086 SYMLINK libspdk_fsdev_aio.so 00:21:09.086 LIB libspdk_blobfs_bdev.a 00:21:09.086 SYMLINK libspdk_keyring_linux.so 00:21:09.086 SO libspdk_blobfs_bdev.so.6.0 00:21:09.086 SYMLINK libspdk_sock_posix.so 
00:21:09.086 CC module/bdev/gpt/vbdev_gpt.o 00:21:09.086 CC module/bdev/error/vbdev_error_rpc.o 00:21:09.086 SYMLINK libspdk_blobfs_bdev.so 00:21:09.086 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:21:09.343 CC module/bdev/nvme/bdev_nvme.o 00:21:09.343 CC module/bdev/passthru/vbdev_passthru.o 00:21:09.343 CC module/bdev/null/bdev_null.o 00:21:09.343 CC module/bdev/raid/bdev_raid.o 00:21:09.343 CC module/bdev/delay/vbdev_delay_rpc.o 00:21:09.343 LIB libspdk_bdev_error.a 00:21:09.343 SO libspdk_bdev_error.so.6.0 00:21:09.343 SYMLINK libspdk_bdev_error.so 00:21:09.602 CC module/bdev/malloc/bdev_malloc_rpc.o 00:21:09.602 LIB libspdk_bdev_gpt.a 00:21:09.602 LIB libspdk_bdev_delay.a 00:21:09.602 SO libspdk_bdev_gpt.so.6.0 00:21:09.602 SO libspdk_bdev_delay.so.6.0 00:21:09.602 CC module/bdev/null/bdev_null_rpc.o 00:21:09.602 SYMLINK libspdk_bdev_gpt.so 00:21:09.602 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:21:09.602 SYMLINK libspdk_bdev_delay.so 00:21:09.602 CC module/bdev/split/vbdev_split.o 00:21:09.602 LIB libspdk_bdev_lvol.a 00:21:09.602 LIB libspdk_bdev_malloc.a 00:21:09.602 SO libspdk_bdev_lvol.so.6.0 00:21:09.602 SO libspdk_bdev_malloc.so.6.0 00:21:09.861 CC module/bdev/zone_block/vbdev_zone_block.o 00:21:09.861 LIB libspdk_bdev_null.a 00:21:09.861 CC module/bdev/aio/bdev_aio.o 00:21:09.861 SYMLINK libspdk_bdev_lvol.so 00:21:09.861 SYMLINK libspdk_bdev_malloc.so 00:21:09.861 CC module/bdev/ftl/bdev_ftl.o 00:21:09.861 CC module/bdev/split/vbdev_split_rpc.o 00:21:09.861 LIB libspdk_bdev_passthru.a 00:21:09.861 SO libspdk_bdev_null.so.6.0 00:21:09.861 SO libspdk_bdev_passthru.so.6.0 00:21:09.861 SYMLINK libspdk_bdev_null.so 00:21:09.861 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:21:09.861 CC module/bdev/aio/bdev_aio_rpc.o 00:21:09.861 SYMLINK libspdk_bdev_passthru.so 00:21:09.861 CC module/bdev/iscsi/bdev_iscsi.o 00:21:10.119 LIB libspdk_bdev_split.a 00:21:10.119 SO libspdk_bdev_split.so.6.0 00:21:10.119 CC module/bdev/raid/bdev_raid_rpc.o 00:21:10.119 
SYMLINK libspdk_bdev_split.so 00:21:10.119 CC module/bdev/ftl/bdev_ftl_rpc.o 00:21:10.119 CC module/bdev/raid/bdev_raid_sb.o 00:21:10.119 CC module/bdev/raid/raid0.o 00:21:10.119 LIB libspdk_bdev_zone_block.a 00:21:10.119 CC module/bdev/virtio/bdev_virtio_scsi.o 00:21:10.119 LIB libspdk_bdev_aio.a 00:21:10.377 SO libspdk_bdev_zone_block.so.6.0 00:21:10.377 SO libspdk_bdev_aio.so.6.0 00:21:10.377 SYMLINK libspdk_bdev_zone_block.so 00:21:10.377 CC module/bdev/virtio/bdev_virtio_blk.o 00:21:10.377 SYMLINK libspdk_bdev_aio.so 00:21:10.377 CC module/bdev/virtio/bdev_virtio_rpc.o 00:21:10.377 LIB libspdk_bdev_ftl.a 00:21:10.377 CC module/bdev/raid/raid1.o 00:21:10.377 SO libspdk_bdev_ftl.so.6.0 00:21:10.377 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:21:10.377 CC module/bdev/raid/concat.o 00:21:10.377 SYMLINK libspdk_bdev_ftl.so 00:21:10.377 CC module/bdev/nvme/bdev_nvme_rpc.o 00:21:10.377 CC module/bdev/raid/raid5f.o 00:21:10.636 CC module/bdev/nvme/nvme_rpc.o 00:21:10.636 LIB libspdk_bdev_iscsi.a 00:21:10.636 CC module/bdev/nvme/bdev_mdns_client.o 00:21:10.636 SO libspdk_bdev_iscsi.so.6.0 00:21:10.636 CC module/bdev/nvme/vbdev_opal.o 00:21:10.636 CC module/bdev/nvme/vbdev_opal_rpc.o 00:21:10.636 SYMLINK libspdk_bdev_iscsi.so 00:21:10.636 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:21:10.894 LIB libspdk_bdev_virtio.a 00:21:10.894 SO libspdk_bdev_virtio.so.6.0 00:21:10.894 SYMLINK libspdk_bdev_virtio.so 00:21:11.153 LIB libspdk_bdev_raid.a 00:21:11.153 SO libspdk_bdev_raid.so.6.0 00:21:11.411 SYMLINK libspdk_bdev_raid.so 00:21:12.821 LIB libspdk_bdev_nvme.a 00:21:12.821 SO libspdk_bdev_nvme.so.7.1 00:21:12.821 SYMLINK libspdk_bdev_nvme.so 00:21:13.206 CC module/event/subsystems/vmd/vmd.o 00:21:13.206 CC module/event/subsystems/vmd/vmd_rpc.o 00:21:13.463 CC module/event/subsystems/scheduler/scheduler.o 00:21:13.463 CC module/event/subsystems/iobuf/iobuf.o 00:21:13.463 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:21:13.463 CC module/event/subsystems/fsdev/fsdev.o 
00:21:13.463 CC module/event/subsystems/sock/sock.o 00:21:13.463 CC module/event/subsystems/keyring/keyring.o 00:21:13.463 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:21:13.463 LIB libspdk_event_scheduler.a 00:21:13.463 LIB libspdk_event_keyring.a 00:21:13.463 LIB libspdk_event_fsdev.a 00:21:13.463 LIB libspdk_event_vhost_blk.a 00:21:13.463 LIB libspdk_event_vmd.a 00:21:13.463 LIB libspdk_event_iobuf.a 00:21:13.463 LIB libspdk_event_sock.a 00:21:13.463 SO libspdk_event_scheduler.so.4.0 00:21:13.463 SO libspdk_event_keyring.so.1.0 00:21:13.463 SO libspdk_event_fsdev.so.1.0 00:21:13.463 SO libspdk_event_vhost_blk.so.3.0 00:21:13.463 SO libspdk_event_sock.so.5.0 00:21:13.463 SO libspdk_event_iobuf.so.3.0 00:21:13.463 SO libspdk_event_vmd.so.6.0 00:21:13.722 SYMLINK libspdk_event_keyring.so 00:21:13.722 SYMLINK libspdk_event_fsdev.so 00:21:13.722 SYMLINK libspdk_event_scheduler.so 00:21:13.722 SYMLINK libspdk_event_iobuf.so 00:21:13.722 SYMLINK libspdk_event_vhost_blk.so 00:21:13.722 SYMLINK libspdk_event_sock.so 00:21:13.722 SYMLINK libspdk_event_vmd.so 00:21:14.019 CC module/event/subsystems/accel/accel.o 00:21:14.019 LIB libspdk_event_accel.a 00:21:14.019 SO libspdk_event_accel.so.6.0 00:21:14.276 SYMLINK libspdk_event_accel.so 00:21:14.532 CC module/event/subsystems/bdev/bdev.o 00:21:14.789 LIB libspdk_event_bdev.a 00:21:14.789 SO libspdk_event_bdev.so.6.0 00:21:14.789 SYMLINK libspdk_event_bdev.so 00:21:15.046 CC module/event/subsystems/nbd/nbd.o 00:21:15.046 CC module/event/subsystems/scsi/scsi.o 00:21:15.046 CC module/event/subsystems/ublk/ublk.o 00:21:15.046 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:21:15.046 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:21:15.046 LIB libspdk_event_nbd.a 00:21:15.046 LIB libspdk_event_ublk.a 00:21:15.304 LIB libspdk_event_scsi.a 00:21:15.304 SO libspdk_event_ublk.so.3.0 00:21:15.304 SO libspdk_event_nbd.so.6.0 00:21:15.304 SO libspdk_event_scsi.so.6.0 00:21:15.304 SYMLINK libspdk_event_nbd.so 00:21:15.304 SYMLINK 
libspdk_event_ublk.so 00:21:15.304 SYMLINK libspdk_event_scsi.so 00:21:15.304 LIB libspdk_event_nvmf.a 00:21:15.304 SO libspdk_event_nvmf.so.6.0 00:21:15.304 SYMLINK libspdk_event_nvmf.so 00:21:15.561 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:21:15.561 CC module/event/subsystems/iscsi/iscsi.o 00:21:15.819 LIB libspdk_event_vhost_scsi.a 00:21:15.819 LIB libspdk_event_iscsi.a 00:21:15.819 SO libspdk_event_vhost_scsi.so.3.0 00:21:15.819 SO libspdk_event_iscsi.so.6.0 00:21:15.819 SYMLINK libspdk_event_vhost_scsi.so 00:21:15.819 SYMLINK libspdk_event_iscsi.so 00:21:16.078 SO libspdk.so.6.0 00:21:16.078 SYMLINK libspdk.so 00:21:16.336 CC app/spdk_nvme_perf/perf.o 00:21:16.336 CC app/spdk_lspci/spdk_lspci.o 00:21:16.336 CC app/spdk_nvme_identify/identify.o 00:21:16.336 CXX app/trace/trace.o 00:21:16.336 CC app/trace_record/trace_record.o 00:21:16.336 CC app/nvmf_tgt/nvmf_main.o 00:21:16.336 CC app/iscsi_tgt/iscsi_tgt.o 00:21:16.336 CC test/thread/poller_perf/poller_perf.o 00:21:16.336 CC examples/util/zipf/zipf.o 00:21:16.336 CC app/spdk_tgt/spdk_tgt.o 00:21:16.593 LINK nvmf_tgt 00:21:16.593 LINK poller_perf 00:21:16.593 LINK spdk_lspci 00:21:16.593 LINK zipf 00:21:16.593 LINK iscsi_tgt 00:21:16.593 LINK spdk_tgt 00:21:16.593 LINK spdk_trace_record 00:21:16.593 LINK spdk_trace 00:21:16.851 CC app/spdk_nvme_discover/discovery_aer.o 00:21:16.851 CC app/spdk_top/spdk_top.o 00:21:16.851 CC test/dma/test_dma/test_dma.o 00:21:16.851 CC app/spdk_dd/spdk_dd.o 00:21:17.108 CC examples/ioat/perf/perf.o 00:21:17.108 CC examples/vmd/lsvmd/lsvmd.o 00:21:17.108 LINK spdk_nvme_discover 00:21:17.108 CC examples/idxd/perf/perf.o 00:21:17.108 CC app/fio/nvme/fio_plugin.o 00:21:17.108 LINK lsvmd 00:21:17.366 LINK ioat_perf 00:21:17.366 LINK spdk_nvme_perf 00:21:17.366 LINK spdk_nvme_identify 00:21:17.366 CC app/vhost/vhost.o 00:21:17.366 LINK spdk_dd 00:21:17.366 CC examples/vmd/led/led.o 00:21:17.643 LINK idxd_perf 00:21:17.643 CC examples/ioat/verify/verify.o 00:21:17.643 LINK 
vhost 00:21:17.643 LINK led 00:21:17.643 LINK test_dma 00:21:17.643 TEST_HEADER include/spdk/accel.h 00:21:17.643 TEST_HEADER include/spdk/accel_module.h 00:21:17.643 TEST_HEADER include/spdk/assert.h 00:21:17.643 TEST_HEADER include/spdk/barrier.h 00:21:17.643 TEST_HEADER include/spdk/base64.h 00:21:17.643 TEST_HEADER include/spdk/bdev.h 00:21:17.643 TEST_HEADER include/spdk/bdev_module.h 00:21:17.643 TEST_HEADER include/spdk/bdev_zone.h 00:21:17.643 TEST_HEADER include/spdk/bit_array.h 00:21:17.643 TEST_HEADER include/spdk/bit_pool.h 00:21:17.644 TEST_HEADER include/spdk/blob_bdev.h 00:21:17.644 TEST_HEADER include/spdk/blobfs_bdev.h 00:21:17.644 TEST_HEADER include/spdk/blobfs.h 00:21:17.644 TEST_HEADER include/spdk/blob.h 00:21:17.644 TEST_HEADER include/spdk/conf.h 00:21:17.644 TEST_HEADER include/spdk/config.h 00:21:17.644 TEST_HEADER include/spdk/cpuset.h 00:21:17.644 TEST_HEADER include/spdk/crc16.h 00:21:17.644 TEST_HEADER include/spdk/crc32.h 00:21:17.644 TEST_HEADER include/spdk/crc64.h 00:21:17.644 TEST_HEADER include/spdk/dif.h 00:21:17.644 TEST_HEADER include/spdk/dma.h 00:21:17.644 TEST_HEADER include/spdk/endian.h 00:21:17.644 TEST_HEADER include/spdk/env_dpdk.h 00:21:17.644 TEST_HEADER include/spdk/env.h 00:21:17.644 TEST_HEADER include/spdk/event.h 00:21:17.644 TEST_HEADER include/spdk/fd_group.h 00:21:17.644 TEST_HEADER include/spdk/fd.h 00:21:17.644 TEST_HEADER include/spdk/file.h 00:21:17.644 TEST_HEADER include/spdk/fsdev.h 00:21:17.644 TEST_HEADER include/spdk/fsdev_module.h 00:21:17.644 CC test/app/bdev_svc/bdev_svc.o 00:21:17.644 TEST_HEADER include/spdk/ftl.h 00:21:17.644 TEST_HEADER include/spdk/fuse_dispatcher.h 00:21:17.644 TEST_HEADER include/spdk/gpt_spec.h 00:21:17.644 TEST_HEADER include/spdk/hexlify.h 00:21:17.644 TEST_HEADER include/spdk/histogram_data.h 00:21:17.644 TEST_HEADER include/spdk/idxd.h 00:21:17.644 TEST_HEADER include/spdk/idxd_spec.h 00:21:17.644 TEST_HEADER include/spdk/init.h 00:21:17.644 TEST_HEADER 
include/spdk/ioat.h 00:21:17.644 TEST_HEADER include/spdk/ioat_spec.h 00:21:17.644 TEST_HEADER include/spdk/iscsi_spec.h 00:21:17.644 TEST_HEADER include/spdk/json.h 00:21:17.644 TEST_HEADER include/spdk/jsonrpc.h 00:21:17.644 TEST_HEADER include/spdk/keyring.h 00:21:17.644 TEST_HEADER include/spdk/keyring_module.h 00:21:17.644 TEST_HEADER include/spdk/likely.h 00:21:17.644 TEST_HEADER include/spdk/log.h 00:21:17.644 CC app/fio/bdev/fio_plugin.o 00:21:17.644 TEST_HEADER include/spdk/lvol.h 00:21:17.644 TEST_HEADER include/spdk/md5.h 00:21:17.644 TEST_HEADER include/spdk/memory.h 00:21:17.644 TEST_HEADER include/spdk/mmio.h 00:21:17.644 TEST_HEADER include/spdk/nbd.h 00:21:17.644 TEST_HEADER include/spdk/net.h 00:21:17.644 TEST_HEADER include/spdk/notify.h 00:21:17.644 TEST_HEADER include/spdk/nvme.h 00:21:17.644 CC examples/interrupt_tgt/interrupt_tgt.o 00:21:17.644 TEST_HEADER include/spdk/nvme_intel.h 00:21:17.644 TEST_HEADER include/spdk/nvme_ocssd.h 00:21:17.903 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:21:17.903 TEST_HEADER include/spdk/nvme_spec.h 00:21:17.903 TEST_HEADER include/spdk/nvme_zns.h 00:21:17.903 TEST_HEADER include/spdk/nvmf_cmd.h 00:21:17.903 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:21:17.903 TEST_HEADER include/spdk/nvmf.h 00:21:17.903 TEST_HEADER include/spdk/nvmf_spec.h 00:21:17.903 TEST_HEADER include/spdk/nvmf_transport.h 00:21:17.903 TEST_HEADER include/spdk/opal.h 00:21:17.903 TEST_HEADER include/spdk/opal_spec.h 00:21:17.903 TEST_HEADER include/spdk/pci_ids.h 00:21:17.903 TEST_HEADER include/spdk/pipe.h 00:21:17.903 TEST_HEADER include/spdk/queue.h 00:21:17.903 LINK verify 00:21:17.903 TEST_HEADER include/spdk/reduce.h 00:21:17.903 TEST_HEADER include/spdk/rpc.h 00:21:17.903 TEST_HEADER include/spdk/scheduler.h 00:21:17.903 TEST_HEADER include/spdk/scsi.h 00:21:17.903 TEST_HEADER include/spdk/scsi_spec.h 00:21:17.903 TEST_HEADER include/spdk/sock.h 00:21:17.903 LINK spdk_nvme 00:21:17.903 TEST_HEADER include/spdk/stdinc.h 
00:21:17.903 TEST_HEADER include/spdk/string.h 00:21:17.903 TEST_HEADER include/spdk/thread.h 00:21:17.903 TEST_HEADER include/spdk/trace.h 00:21:17.903 TEST_HEADER include/spdk/trace_parser.h 00:21:17.903 TEST_HEADER include/spdk/tree.h 00:21:17.903 TEST_HEADER include/spdk/ublk.h 00:21:17.903 TEST_HEADER include/spdk/util.h 00:21:17.903 TEST_HEADER include/spdk/uuid.h 00:21:17.903 TEST_HEADER include/spdk/version.h 00:21:17.903 TEST_HEADER include/spdk/vfio_user_pci.h 00:21:17.903 TEST_HEADER include/spdk/vfio_user_spec.h 00:21:17.903 TEST_HEADER include/spdk/vhost.h 00:21:17.903 TEST_HEADER include/spdk/vmd.h 00:21:17.903 TEST_HEADER include/spdk/xor.h 00:21:17.903 TEST_HEADER include/spdk/zipf.h 00:21:17.903 CXX test/cpp_headers/accel.o 00:21:17.903 LINK bdev_svc 00:21:17.903 LINK interrupt_tgt 00:21:17.903 LINK spdk_top 00:21:18.161 CXX test/cpp_headers/accel_module.o 00:21:18.161 CC examples/thread/thread/thread_ex.o 00:21:18.161 CC test/app/histogram_perf/histogram_perf.o 00:21:18.161 CC examples/sock/hello_world/hello_sock.o 00:21:18.161 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:21:18.161 CXX test/cpp_headers/assert.o 00:21:18.161 CC test/env/mem_callbacks/mem_callbacks.o 00:21:18.161 CXX test/cpp_headers/barrier.o 00:21:18.161 CXX test/cpp_headers/base64.o 00:21:18.161 LINK histogram_perf 00:21:18.421 CXX test/cpp_headers/bdev.o 00:21:18.421 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:21:18.421 CXX test/cpp_headers/bdev_module.o 00:21:18.421 LINK thread 00:21:18.421 LINK spdk_bdev 00:21:18.421 CXX test/cpp_headers/bdev_zone.o 00:21:18.421 LINK hello_sock 00:21:18.421 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:21:18.680 CXX test/cpp_headers/bit_array.o 00:21:18.680 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:21:18.680 CXX test/cpp_headers/bit_pool.o 00:21:18.680 CC test/env/vtophys/vtophys.o 00:21:18.680 LINK nvme_fuzz 00:21:18.680 LINK vtophys 00:21:18.680 CXX test/cpp_headers/blob_bdev.o 00:21:18.939 CC examples/accel/perf/accel_perf.o 00:21:18.939 
CC examples/blob/hello_world/hello_blob.o 00:21:18.939 LINK mem_callbacks 00:21:18.939 CXX test/cpp_headers/blobfs_bdev.o 00:21:18.939 CC examples/blob/cli/blobcli.o 00:21:18.939 CC examples/nvme/hello_world/hello_world.o 00:21:18.939 CXX test/cpp_headers/blobfs.o 00:21:18.939 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:21:19.198 CC test/env/memory/memory_ut.o 00:21:19.198 CXX test/cpp_headers/blob.o 00:21:19.198 LINK vhost_fuzz 00:21:19.198 LINK hello_blob 00:21:19.198 LINK hello_world 00:21:19.198 LINK env_dpdk_post_init 00:21:19.198 CXX test/cpp_headers/conf.o 00:21:19.456 CC examples/fsdev/hello_world/hello_fsdev.o 00:21:19.456 CXX test/cpp_headers/config.o 00:21:19.456 CC examples/nvme/reconnect/reconnect.o 00:21:19.456 CC examples/nvme/nvme_manage/nvme_manage.o 00:21:19.456 LINK accel_perf 00:21:19.456 CXX test/cpp_headers/cpuset.o 00:21:19.456 LINK blobcli 00:21:19.456 CC examples/nvme/arbitration/arbitration.o 00:21:19.456 CC examples/nvme/hotplug/hotplug.o 00:21:19.715 CXX test/cpp_headers/crc16.o 00:21:19.715 LINK hello_fsdev 00:21:19.715 CC examples/nvme/cmb_copy/cmb_copy.o 00:21:19.715 CC examples/nvme/abort/abort.o 00:21:19.715 LINK hotplug 00:21:19.715 LINK reconnect 00:21:19.974 CXX test/cpp_headers/crc32.o 00:21:19.974 CXX test/cpp_headers/crc64.o 00:21:19.974 LINK arbitration 00:21:19.974 LINK cmb_copy 00:21:19.974 CXX test/cpp_headers/dif.o 00:21:19.974 CXX test/cpp_headers/dma.o 00:21:19.974 CXX test/cpp_headers/endian.o 00:21:20.233 LINK nvme_manage 00:21:20.233 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:21:20.233 CXX test/cpp_headers/env_dpdk.o 00:21:20.233 CC test/event/event_perf/event_perf.o 00:21:20.233 LINK abort 00:21:20.233 CC test/env/pci/pci_ut.o 00:21:20.492 CC test/rpc_client/rpc_client_test.o 00:21:20.492 LINK pmr_persistence 00:21:20.492 CXX test/cpp_headers/env.o 00:21:20.492 LINK event_perf 00:21:20.492 CC test/nvme/aer/aer.o 00:21:20.492 LINK memory_ut 00:21:20.492 CC test/accel/dif/dif.o 00:21:20.492 LINK 
iscsi_fuzz 00:21:20.492 CXX test/cpp_headers/event.o 00:21:20.751 LINK rpc_client_test 00:21:20.751 CC test/blobfs/mkfs/mkfs.o 00:21:20.751 CC test/event/reactor/reactor.o 00:21:20.751 LINK pci_ut 00:21:20.751 LINK aer 00:21:20.751 CC examples/bdev/hello_world/hello_bdev.o 00:21:20.751 CXX test/cpp_headers/fd_group.o 00:21:20.751 CC test/app/jsoncat/jsoncat.o 00:21:20.751 LINK reactor 00:21:20.751 LINK mkfs 00:21:21.050 CC examples/bdev/bdevperf/bdevperf.o 00:21:21.050 CXX test/cpp_headers/fd.o 00:21:21.050 LINK jsoncat 00:21:21.050 CC test/lvol/esnap/esnap.o 00:21:21.050 CXX test/cpp_headers/file.o 00:21:21.050 CC test/nvme/reset/reset.o 00:21:21.050 LINK hello_bdev 00:21:21.050 CC test/event/reactor_perf/reactor_perf.o 00:21:21.050 CC test/app/stub/stub.o 00:21:21.308 CXX test/cpp_headers/fsdev.o 00:21:21.308 LINK reactor_perf 00:21:21.308 CC test/nvme/sgl/sgl.o 00:21:21.308 CC test/event/app_repeat/app_repeat.o 00:21:21.308 LINK stub 00:21:21.308 CC test/nvme/e2edp/nvme_dp.o 00:21:21.308 LINK reset 00:21:21.567 CXX test/cpp_headers/fsdev_module.o 00:21:21.567 LINK dif 00:21:21.567 LINK app_repeat 00:21:21.567 CC test/nvme/overhead/overhead.o 00:21:21.567 LINK sgl 00:21:21.567 CC test/nvme/err_injection/err_injection.o 00:21:21.567 CXX test/cpp_headers/ftl.o 00:21:21.567 CC test/nvme/startup/startup.o 00:21:21.825 CC test/nvme/reserve/reserve.o 00:21:21.825 LINK nvme_dp 00:21:21.826 CC test/event/scheduler/scheduler.o 00:21:21.826 LINK overhead 00:21:21.826 LINK startup 00:21:21.826 LINK err_injection 00:21:21.826 CC test/nvme/simple_copy/simple_copy.o 00:21:21.826 CXX test/cpp_headers/fuse_dispatcher.o 00:21:22.084 LINK bdevperf 00:21:22.084 LINK reserve 00:21:22.084 CC test/nvme/connect_stress/connect_stress.o 00:21:22.084 CXX test/cpp_headers/gpt_spec.o 00:21:22.084 LINK scheduler 00:21:22.084 CXX test/cpp_headers/hexlify.o 00:21:22.084 CC test/nvme/boot_partition/boot_partition.o 00:21:22.084 CC test/nvme/compliance/nvme_compliance.o 00:21:22.084 LINK 
simple_copy 00:21:22.343 CC test/nvme/fused_ordering/fused_ordering.o 00:21:22.343 LINK connect_stress 00:21:22.343 CXX test/cpp_headers/histogram_data.o 00:21:22.343 CC test/nvme/doorbell_aers/doorbell_aers.o 00:21:22.343 LINK boot_partition 00:21:22.343 CC examples/nvmf/nvmf/nvmf.o 00:21:22.601 CC test/nvme/fdp/fdp.o 00:21:22.601 CXX test/cpp_headers/idxd.o 00:21:22.601 LINK doorbell_aers 00:21:22.601 CXX test/cpp_headers/idxd_spec.o 00:21:22.601 CC test/bdev/bdevio/bdevio.o 00:21:22.601 LINK fused_ordering 00:21:22.601 CC test/nvme/cuse/cuse.o 00:21:22.601 LINK nvme_compliance 00:21:22.907 CXX test/cpp_headers/init.o 00:21:22.907 CXX test/cpp_headers/ioat.o 00:21:22.907 CXX test/cpp_headers/ioat_spec.o 00:21:22.907 CXX test/cpp_headers/iscsi_spec.o 00:21:22.907 LINK nvmf 00:21:22.907 CXX test/cpp_headers/json.o 00:21:22.907 CXX test/cpp_headers/jsonrpc.o 00:21:22.907 CXX test/cpp_headers/keyring.o 00:21:22.907 LINK fdp 00:21:22.907 CXX test/cpp_headers/keyring_module.o 00:21:22.907 CXX test/cpp_headers/likely.o 00:21:22.907 CXX test/cpp_headers/log.o 00:21:22.907 CXX test/cpp_headers/lvol.o 00:21:23.166 LINK bdevio 00:21:23.166 CXX test/cpp_headers/md5.o 00:21:23.166 CXX test/cpp_headers/memory.o 00:21:23.166 CXX test/cpp_headers/mmio.o 00:21:23.166 CXX test/cpp_headers/nbd.o 00:21:23.166 CXX test/cpp_headers/net.o 00:21:23.166 CXX test/cpp_headers/notify.o 00:21:23.166 CXX test/cpp_headers/nvme.o 00:21:23.166 CXX test/cpp_headers/nvme_intel.o 00:21:23.166 CXX test/cpp_headers/nvme_ocssd.o 00:21:23.166 CXX test/cpp_headers/nvme_ocssd_spec.o 00:21:23.166 CXX test/cpp_headers/nvme_spec.o 00:21:23.424 CXX test/cpp_headers/nvme_zns.o 00:21:23.424 CXX test/cpp_headers/nvmf_cmd.o 00:21:23.424 CXX test/cpp_headers/nvmf_fc_spec.o 00:21:23.424 CXX test/cpp_headers/nvmf.o 00:21:23.424 CXX test/cpp_headers/nvmf_spec.o 00:21:23.424 CXX test/cpp_headers/nvmf_transport.o 00:21:23.424 CXX test/cpp_headers/opal.o 00:21:23.424 CXX test/cpp_headers/opal_spec.o 00:21:23.424 CXX 
test/cpp_headers/pci_ids.o 00:21:23.424 CXX test/cpp_headers/pipe.o 00:21:23.683 CXX test/cpp_headers/queue.o 00:21:23.683 CXX test/cpp_headers/reduce.o 00:21:23.683 CXX test/cpp_headers/rpc.o 00:21:23.683 CXX test/cpp_headers/scheduler.o 00:21:23.683 CXX test/cpp_headers/scsi.o 00:21:23.683 CXX test/cpp_headers/scsi_spec.o 00:21:23.683 CXX test/cpp_headers/sock.o 00:21:23.683 CXX test/cpp_headers/stdinc.o 00:21:23.683 CXX test/cpp_headers/string.o 00:21:23.683 CXX test/cpp_headers/thread.o 00:21:23.683 CXX test/cpp_headers/trace.o 00:21:23.941 CXX test/cpp_headers/trace_parser.o 00:21:23.941 CXX test/cpp_headers/tree.o 00:21:23.941 CXX test/cpp_headers/ublk.o 00:21:23.941 CXX test/cpp_headers/util.o 00:21:23.941 CXX test/cpp_headers/uuid.o 00:21:23.941 CXX test/cpp_headers/version.o 00:21:23.941 CXX test/cpp_headers/vfio_user_pci.o 00:21:23.941 CXX test/cpp_headers/vfio_user_spec.o 00:21:23.941 CXX test/cpp_headers/vhost.o 00:21:23.941 CXX test/cpp_headers/vmd.o 00:21:23.941 CXX test/cpp_headers/xor.o 00:21:23.941 CXX test/cpp_headers/zipf.o 00:21:24.227 LINK cuse 00:21:28.415 LINK esnap 00:21:28.415 00:21:28.415 real 1m34.316s 00:21:28.415 user 7m32.841s 00:21:28.415 sys 1m17.544s 00:21:28.415 ************************************ 00:21:28.415 END TEST make 00:21:28.415 ************************************ 00:21:28.415 13:32:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:21:28.415 13:32:42 make -- common/autotest_common.sh@10 -- $ set +x 00:21:28.415 13:32:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:21:28.415 13:32:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:28.415 13:32:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:28.415 13:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.415 13:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:28.415 13:32:42 -- pm/common@44 -- $ pid=6013 00:21:28.415 13:32:42 -- pm/common@50 -- 
$ kill -TERM 6013 00:21:28.415 13:32:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.415 13:32:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:28.415 13:32:42 -- pm/common@44 -- $ pid=6015 00:21:28.415 13:32:42 -- pm/common@50 -- $ kill -TERM 6015 00:21:28.415 13:32:42 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:28.415 13:32:42 -- common/autotest_common.sh@1689 -- # lcov --version 00:21:28.415 13:32:42 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:28.674 13:32:42 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:28.674 13:32:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.674 13:32:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.674 13:32:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.674 13:32:42 -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.674 13:32:42 -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.674 13:32:42 -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.674 13:32:42 -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.674 13:32:42 -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.674 13:32:42 -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.674 13:32:42 -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.674 13:32:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.674 13:32:42 -- scripts/common.sh@344 -- # case "$op" in 00:21:28.674 13:32:42 -- scripts/common.sh@345 -- # : 1 00:21:28.674 13:32:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.674 13:32:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.674 13:32:42 -- scripts/common.sh@365 -- # decimal 1 00:21:28.674 13:32:42 -- scripts/common.sh@353 -- # local d=1 00:21:28.674 13:32:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.674 13:32:42 -- scripts/common.sh@355 -- # echo 1 00:21:28.674 13:32:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.674 13:32:42 -- scripts/common.sh@366 -- # decimal 2 00:21:28.674 13:32:42 -- scripts/common.sh@353 -- # local d=2 00:21:28.674 13:32:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.674 13:32:42 -- scripts/common.sh@355 -- # echo 2 00:21:28.674 13:32:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.674 13:32:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.674 13:32:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.674 13:32:42 -- scripts/common.sh@368 -- # return 0 00:21:28.674 13:32:42 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.674 13:32:42 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:28.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.674 --rc genhtml_branch_coverage=1 00:21:28.674 --rc genhtml_function_coverage=1 00:21:28.674 --rc genhtml_legend=1 00:21:28.674 --rc geninfo_all_blocks=1 00:21:28.675 --rc geninfo_unexecuted_blocks=1 00:21:28.675 00:21:28.675 ' 00:21:28.675 13:32:42 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.675 --rc genhtml_branch_coverage=1 00:21:28.675 --rc genhtml_function_coverage=1 00:21:28.675 --rc genhtml_legend=1 00:21:28.675 --rc geninfo_all_blocks=1 00:21:28.675 --rc geninfo_unexecuted_blocks=1 00:21:28.675 00:21:28.675 ' 00:21:28.675 13:32:42 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.675 --rc genhtml_branch_coverage=1 00:21:28.675 --rc 
genhtml_function_coverage=1 00:21:28.675 --rc genhtml_legend=1 00:21:28.675 --rc geninfo_all_blocks=1 00:21:28.675 --rc geninfo_unexecuted_blocks=1 00:21:28.675 00:21:28.675 ' 00:21:28.675 13:32:42 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.675 --rc genhtml_branch_coverage=1 00:21:28.675 --rc genhtml_function_coverage=1 00:21:28.675 --rc genhtml_legend=1 00:21:28.675 --rc geninfo_all_blocks=1 00:21:28.675 --rc geninfo_unexecuted_blocks=1 00:21:28.675 00:21:28.675 ' 00:21:28.675 13:32:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.675 13:32:42 -- nvmf/common.sh@7 -- # uname -s 00:21:28.675 13:32:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.675 13:32:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.675 13:32:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.675 13:32:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.675 13:32:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.675 13:32:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.675 13:32:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.675 13:32:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.675 13:32:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.675 13:32:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.675 13:32:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84390273-455e-4de1-ba26-b651941d9928 00:21:28.675 13:32:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=84390273-455e-4de1-ba26-b651941d9928 00:21:28.675 13:32:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.675 13:32:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.675 13:32:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:28.675 13:32:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
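The trace above shows `scripts/common.sh` gating the LCOV option names on whether the installed lcov (1.15 here) is older than 2, by splitting each version on `.`/`-`/`:` and comparing field by field. A minimal sketch of that dotted-version comparison, with a hypothetical `version_lt` helper rather than the exact SPDK `cmp_versions` implementation:

```shell
#!/usr/bin/env bash
# Sketch of a field-by-field dotted-version "less than" test, in the style
# of the cmp_versions trace above. version_lt is an illustrative name.
version_lt() {
    local IFS=.-:                      # split on the same separators the trace uses
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1                            # equal versions are not "less than"
}

if version_lt 1.15 2; then
    echo "lcov 1.15 < 2: use legacy --rc lcov_* option names"
fi
```

This mirrors why the run above selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling: lcov 2.x renamed those settings, so the script branches on the comparison result.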
00:21:28.675 13:32:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.675 13:32:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.675 13:32:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.675 13:32:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.675 13:32:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.675 13:32:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.675 13:32:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.675 13:32:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.675 13:32:42 -- paths/export.sh@5 -- # export PATH 00:21:28.675 13:32:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.675 13:32:42 -- nvmf/common.sh@51 -- # : 0 00:21:28.675 13:32:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.675 13:32:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.675 13:32:42 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:21:28.675 13:32:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.675 13:32:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.675 13:32:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.675 13:32:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.675 13:32:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.675 13:32:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.675 13:32:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:21:28.675 13:32:42 -- spdk/autotest.sh@32 -- # uname -s 00:21:28.675 13:32:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:21:28.675 13:32:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:21:28.675 13:32:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:28.675 13:32:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:21:28.675 13:32:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:28.675 13:32:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:21:28.675 13:32:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:21:28.675 13:32:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:21:28.675 13:32:42 -- spdk/autotest.sh@48 -- # udevadm_pid=68101 00:21:28.675 13:32:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:21:28.675 13:32:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:21:28.675 13:32:42 -- pm/common@17 -- # local monitor 00:21:28.675 13:32:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.675 13:32:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:21:28.675 13:32:42 -- pm/common@21 -- # date +%s 00:21:28.675 13:32:42 -- pm/common@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730122362 00:21:28.675 13:32:42 -- pm/common@25 -- # sleep 1 00:21:28.675 13:32:42 -- pm/common@21 -- # date +%s 00:21:28.675 13:32:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730122362 00:21:28.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730122362_collect-vmstat.pm.log 00:21:28.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730122362_collect-cpu-load.pm.log 00:21:29.610 13:32:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:21:29.610 13:32:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:21:29.610 13:32:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.610 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.610 13:32:43 -- spdk/autotest.sh@59 -- # create_test_list 00:21:29.610 13:32:43 -- common/autotest_common.sh@748 -- # xtrace_disable 00:21:29.610 13:32:43 -- common/autotest_common.sh@10 -- # set +x 00:21:29.868 13:32:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:21:29.868 13:32:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:21:29.868 13:32:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:21:29.868 13:32:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:21:29.868 13:32:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:21:29.868 13:32:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:21:29.868 13:32:43 -- common/autotest_common.sh@1453 -- # uname 00:21:29.868 13:32:43 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:21:29.868 13:32:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:21:29.868 13:32:43 -- 
common/autotest_common.sh@1473 -- # uname 00:21:29.868 13:32:43 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:21:29.868 13:32:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:21:29.868 13:32:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:21:29.868 lcov: LCOV version 1.15 00:21:29.868 13:32:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:21:47.950 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:21:47.950 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:22:06.076 13:33:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:22:06.076 13:33:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.076 13:33:18 -- common/autotest_common.sh@10 -- # set +x 00:22:06.076 13:33:18 -- spdk/autotest.sh@78 -- # rm -f 00:22:06.076 13:33:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:06.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:06.076 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:06.076 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:06.076 13:33:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:22:06.076 13:33:18 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:22:06.076 13:33:18 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:22:06.076 13:33:18 -- common/autotest_common.sh@1654 -- # 
local nvme bdf 00:22:06.076 13:33:18 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:22:06.076 13:33:18 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:22:06.076 13:33:18 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:22:06.076 13:33:18 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:22:06.076 13:33:18 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:22:06.076 13:33:18 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:22:06.076 13:33:18 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:22:06.076 13:33:18 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n2 00:22:06.076 13:33:18 -- common/autotest_common.sh@1646 -- # local device=nvme1n2 00:22:06.076 13:33:18 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:22:06.076 13:33:18 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n3 00:22:06.076 13:33:18 -- common/autotest_common.sh@1646 -- # local device=nvme1n3 00:22:06.076 13:33:18 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:22:06.076 13:33:18 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:22:06.076 13:33:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:22:06.076 13:33:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:06.076 13:33:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:06.076 13:33:18 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:22:06.076 13:33:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:22:06.076 13:33:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:22:06.076 No valid GPT data, bailing 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # pt= 00:22:06.076 13:33:19 -- scripts/common.sh@395 -- # return 1 00:22:06.076 13:33:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:22:06.076 1+0 records in 00:22:06.076 1+0 records out 00:22:06.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502835 s, 209 MB/s 00:22:06.076 13:33:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:06.076 13:33:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:06.076 13:33:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:22:06.076 13:33:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:22:06.076 13:33:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:22:06.076 No valid GPT data, bailing 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # pt= 00:22:06.076 13:33:19 -- scripts/common.sh@395 -- # return 1 00:22:06.076 13:33:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:22:06.076 1+0 records in 00:22:06.076 1+0 records out 00:22:06.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465981 s, 225 MB/s 00:22:06.076 13:33:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:06.076 13:33:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:06.076 13:33:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:22:06.076 13:33:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:22:06.076 13:33:19 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:22:06.076 No valid GPT data, bailing 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # pt= 00:22:06.076 13:33:19 -- scripts/common.sh@395 -- # return 1 00:22:06.076 13:33:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:22:06.076 1+0 records in 00:22:06.076 1+0 records out 00:22:06.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495812 s, 211 MB/s 00:22:06.076 13:33:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:22:06.076 13:33:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:22:06.076 13:33:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:22:06.076 13:33:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:22:06.076 13:33:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:22:06.076 No valid GPT data, bailing 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:22:06.076 13:33:19 -- scripts/common.sh@394 -- # pt= 00:22:06.076 13:33:19 -- scripts/common.sh@395 -- # return 1 00:22:06.076 13:33:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:22:06.076 1+0 records in 00:22:06.076 1+0 records out 00:22:06.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450026 s, 233 MB/s 00:22:06.076 13:33:19 -- spdk/autotest.sh@105 -- # sync 00:22:06.076 13:33:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:22:06.076 13:33:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:22:06.076 13:33:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:22:07.452 13:33:21 -- spdk/autotest.sh@111 -- # uname -s 00:22:07.452 13:33:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:22:07.452 13:33:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:22:07.452 13:33:21 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:22:08.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:08.019 Hugepages 00:22:08.019 node hugesize free / total 00:22:08.019 node0 1048576kB 0 / 0 00:22:08.019 node0 2048kB 0 / 0 00:22:08.019 00:22:08.019 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:08.019 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:22:08.332 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:22:08.333 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:22:08.333 13:33:22 -- spdk/autotest.sh@117 -- # uname -s 00:22:08.333 13:33:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:22:08.333 13:33:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:22:08.333 13:33:22 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:08.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:08.946 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:09.206 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:09.206 13:33:23 -- common/autotest_common.sh@1513 -- # sleep 1 00:22:10.142 13:33:24 -- common/autotest_common.sh@1514 -- # bdfs=() 00:22:10.142 13:33:24 -- common/autotest_common.sh@1514 -- # local bdfs 00:22:10.142 13:33:24 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:22:10.142 13:33:24 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:22:10.142 13:33:24 -- common/autotest_common.sh@1494 -- # bdfs=() 00:22:10.142 13:33:24 -- common/autotest_common.sh@1494 -- # local bdfs 00:22:10.142 13:33:24 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:10.142 13:33:24 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:10.142 13:33:24 -- common/autotest_common.sh@1495 -- # jq -r 
'.config[].params.traddr' 00:22:10.142 13:33:24 -- common/autotest_common.sh@1496 -- # (( 2 == 0 )) 00:22:10.142 13:33:24 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:10.142 13:33:24 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:10.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:10.710 Waiting for block devices as requested 00:22:10.710 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:10.710 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:10.710 13:33:24 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:22:10.710 13:33:24 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:22:10.710 13:33:24 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:22:10.710 13:33:24 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1527 -- # grep oacs 00:22:10.710 13:33:24 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:22:10.710 13:33:24 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:22:10.710 13:33:24 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 
00:22:10.710 13:33:24 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:22:10.710 13:33:24 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:22:10.710 13:33:24 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:22:10.710 13:33:24 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:22:10.710 13:33:24 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:22:10.710 13:33:24 -- common/autotest_common.sh@1539 -- # continue 00:22:10.710 13:33:24 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:22:10.710 13:33:24 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:22:10.710 13:33:24 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:10.710 13:33:24 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:22:10.710 13:33:24 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:22:10.970 13:33:24 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:22:10.970 13:33:24 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:22:10.970 13:33:24 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:22:10.970 13:33:24 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:22:10.970 13:33:24 -- common/autotest_common.sh@1527 -- # grep oacs 00:22:10.970 13:33:24 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:22:10.970 13:33:24 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:22:10.970 13:33:24 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:22:10.970 13:33:24 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:22:10.970 13:33:24 
-- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:22:10.970 13:33:24 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:22:10.970 13:33:24 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:22:10.970 13:33:24 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:22:10.970 13:33:24 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:22:10.970 13:33:24 -- common/autotest_common.sh@1539 -- # continue 00:22:10.970 13:33:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:22:10.970 13:33:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.970 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.970 13:33:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:22:10.970 13:33:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.970 13:33:24 -- common/autotest_common.sh@10 -- # set +x 00:22:10.970 13:33:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:11.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:11.537 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:11.795 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:11.795 13:33:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:22:11.795 13:33:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.795 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.795 13:33:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:22:11.795 13:33:25 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:22:11.795 13:33:25 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:22:11.795 13:33:25 -- common/autotest_common.sh@1559 -- # bdfs=() 00:22:11.795 13:33:25 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:22:11.795 13:33:25 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:22:11.795 13:33:25 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:22:11.795 13:33:25 -- 
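
The oacs/unvmcap probe traced above can be reproduced stand-alone by parsing a captured `nvme id-ctrl` line instead of a live device (the `0x12a` value is taken from this log; bit 3 of OACS is namespace-management support per the NVMe spec):

```shell
# Stand-alone sketch of the oacs check from the trace above, using a
# captured id-ctrl line rather than a real /dev/nvme device.
line='oacs      : 0x12a'                       # value seen in this log
oacs=$(printf '%s\n' "$line" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))               # bit 3 = NS management support
echo "$oacs_ns_manage"   # 8 -> supported, so the script goes on to check unvmcap
```
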
common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:22:11.795 13:33:25 -- common/autotest_common.sh@1494 -- # bdfs=() 00:22:11.795 13:33:25 -- common/autotest_common.sh@1494 -- # local bdfs 00:22:11.795 13:33:25 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:11.795 13:33:25 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:11.795 13:33:25 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:22:11.795 13:33:25 -- common/autotest_common.sh@1496 -- # (( 2 == 0 )) 00:22:11.795 13:33:25 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:11.795 13:33:25 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:22:11.795 13:33:25 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:22:11.795 13:33:25 -- common/autotest_common.sh@1562 -- # device=0x0010 00:22:11.795 13:33:25 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:11.795 13:33:25 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:22:11.796 13:33:25 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:22:11.796 13:33:25 -- common/autotest_common.sh@1562 -- # device=0x0010 00:22:11.796 13:33:25 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:22:11.796 13:33:25 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:22:11.796 13:33:25 -- common/autotest_common.sh@1568 -- # return 0 00:22:11.796 13:33:25 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:22:11.796 13:33:25 -- common/autotest_common.sh@1576 -- # return 0 00:22:11.796 13:33:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:22:11.796 13:33:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:22:11.796 13:33:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:11.796 13:33:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:22:11.796 13:33:25 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:22:11.796 13:33:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.796 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.796 13:33:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:22:11.796 13:33:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:11.796 13:33:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:11.796 13:33:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:11.796 13:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.796 ************************************ 00:22:11.796 START TEST env 00:22:11.796 ************************************ 00:22:11.796 13:33:25 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:22:12.054 * Looking for test storage... 00:22:12.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1689 -- # lcov --version 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:12.054 13:33:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.054 13:33:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.054 13:33:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.054 13:33:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.054 13:33:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.054 13:33:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.054 13:33:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.054 13:33:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.054 13:33:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.054 13:33:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.054 13:33:26 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:22:12.054 13:33:26 env -- scripts/common.sh@344 -- # case "$op" in 00:22:12.054 13:33:26 env -- scripts/common.sh@345 -- # : 1 00:22:12.054 13:33:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.054 13:33:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.054 13:33:26 env -- scripts/common.sh@365 -- # decimal 1 00:22:12.054 13:33:26 env -- scripts/common.sh@353 -- # local d=1 00:22:12.054 13:33:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.054 13:33:26 env -- scripts/common.sh@355 -- # echo 1 00:22:12.054 13:33:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.054 13:33:26 env -- scripts/common.sh@366 -- # decimal 2 00:22:12.054 13:33:26 env -- scripts/common.sh@353 -- # local d=2 00:22:12.054 13:33:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.054 13:33:26 env -- scripts/common.sh@355 -- # echo 2 00:22:12.054 13:33:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.054 13:33:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.054 13:33:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.054 13:33:26 env -- scripts/common.sh@368 -- # return 0 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.054 --rc genhtml_branch_coverage=1 00:22:12.054 --rc genhtml_function_coverage=1 00:22:12.054 --rc genhtml_legend=1 00:22:12.054 --rc geninfo_all_blocks=1 00:22:12.054 --rc geninfo_unexecuted_blocks=1 00:22:12.054 00:22:12.054 ' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.054 --rc genhtml_branch_coverage=1 00:22:12.054 --rc genhtml_function_coverage=1 
00:22:12.054 --rc genhtml_legend=1 00:22:12.054 --rc geninfo_all_blocks=1 00:22:12.054 --rc geninfo_unexecuted_blocks=1 00:22:12.054 00:22:12.054 ' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.054 --rc genhtml_branch_coverage=1 00:22:12.054 --rc genhtml_function_coverage=1 00:22:12.054 --rc genhtml_legend=1 00:22:12.054 --rc geninfo_all_blocks=1 00:22:12.054 --rc geninfo_unexecuted_blocks=1 00:22:12.054 00:22:12.054 ' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.054 --rc genhtml_branch_coverage=1 00:22:12.054 --rc genhtml_function_coverage=1 00:22:12.054 --rc genhtml_legend=1 00:22:12.054 --rc geninfo_all_blocks=1 00:22:12.054 --rc geninfo_unexecuted_blocks=1 00:22:12.054 00:22:12.054 ' 00:22:12.054 13:33:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:12.054 13:33:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.054 13:33:26 env -- common/autotest_common.sh@10 -- # set +x 00:22:12.054 ************************************ 00:22:12.054 START TEST env_memory 00:22:12.054 ************************************ 00:22:12.054 13:33:26 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:22:12.054 00:22:12.054 00:22:12.054 CUnit - A unit testing framework for C - Version 2.1-3 00:22:12.054 http://cunit.sourceforge.net/ 00:22:12.054 00:22:12.055 00:22:12.055 Suite: memory 00:22:12.313 Test: alloc and free memory map ...[2024-10-28 13:33:26.225519] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:22:12.313 passed 00:22:12.313 Test: mem map translation 
...[2024-10-28 13:33:26.285497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:22:12.314 [2024-10-28 13:33:26.285807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:22:12.314 [2024-10-28 13:33:26.286119] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:22:12.314 [2024-10-28 13:33:26.286343] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:22:12.314 passed 00:22:12.314 Test: mem map registration ...[2024-10-28 13:33:26.383827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:22:12.314 [2024-10-28 13:33:26.383976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:22:12.314 passed 00:22:12.573 Test: mem map adjacent registrations ...passed 00:22:12.573 00:22:12.573 Run Summary: Type Total Ran Passed Failed Inactive 00:22:12.573 suites 1 1 n/a 0 0 00:22:12.573 tests 4 4 4 0 0 00:22:12.573 asserts 152 152 152 0 n/a 00:22:12.573 00:22:12.573 Elapsed time = 0.339 seconds 00:22:12.573 00:22:12.573 real 0m0.378s 00:22:12.573 user 0m0.339s 00:22:12.573 sys 0m0.030s 00:22:12.573 ************************************ 00:22:12.573 END TEST env_memory 00:22:12.573 ************************************ 00:22:12.573 13:33:26 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.573 13:33:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:22:12.573 13:33:26 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:12.573 13:33:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:12.573 13:33:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.573 13:33:26 env -- common/autotest_common.sh@10 -- # set +x 00:22:12.573 ************************************ 00:22:12.573 START TEST env_vtophys 00:22:12.573 ************************************ 00:22:12.573 13:33:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:22:12.573 EAL: lib.eal log level changed from notice to debug 00:22:12.573 EAL: Detected lcore 0 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 1 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 2 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 3 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 4 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 5 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 6 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 7 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 8 as core 0 on socket 0 00:22:12.573 EAL: Detected lcore 9 as core 0 on socket 0 00:22:12.573 EAL: Maximum logical cores by configuration: 128 00:22:12.573 EAL: Detected CPU lcores: 10 00:22:12.573 EAL: Detected NUMA nodes: 1 00:22:12.573 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:22:12.573 EAL: Detected shared linkage of DPDK 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:22:12.573 EAL: Registered [vdev] bus. 
00:22:12.573 EAL: bus.vdev log level changed from disabled to notice 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:22:12.573 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:22:12.573 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:22:12.573 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:22:12.573 EAL: No shared files mode enabled, IPC will be disabled 00:22:12.573 EAL: No shared files mode enabled, IPC is disabled 00:22:12.573 EAL: Selected IOVA mode 'PA' 00:22:12.573 EAL: Probing VFIO support... 00:22:12.573 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:22:12.573 EAL: VFIO modules not loaded, skipping VFIO support... 00:22:12.573 EAL: Ask a virtual area of 0x2e000 bytes 00:22:12.573 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:22:12.573 EAL: Setting up physically contiguous memory... 
00:22:12.573 EAL: Setting maximum number of open files to 524288 00:22:12.573 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:22:12.573 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:22:12.573 EAL: Ask a virtual area of 0x61000 bytes 00:22:12.573 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:22:12.573 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:12.573 EAL: Ask a virtual area of 0x400000000 bytes 00:22:12.573 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:22:12.573 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:22:12.573 EAL: Ask a virtual area of 0x61000 bytes 00:22:12.573 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:22:12.573 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:12.573 EAL: Ask a virtual area of 0x400000000 bytes 00:22:12.573 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:22:12.573 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:22:12.573 EAL: Ask a virtual area of 0x61000 bytes 00:22:12.573 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:22:12.573 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:12.573 EAL: Ask a virtual area of 0x400000000 bytes 00:22:12.573 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:22:12.573 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:22:12.573 EAL: Ask a virtual area of 0x61000 bytes 00:22:12.573 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:22:12.573 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:22:12.573 EAL: Ask a virtual area of 0x400000000 bytes 00:22:12.573 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:22:12.573 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:22:12.573 EAL: Hugepages will be freed exactly as allocated. 
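
The four memseg lists above land at evenly spaced virtual addresses. A small sketch reproduces the spacing; the stride decomposition (0x400000000 VA window plus the 0x61000 header padded to a 2 MB boundary) is an inference from this trace, not from DPDK source:

```shell
# Successive memseg-list VA windows in the trace above are 0x400200000 apart:
# the 0x400000000 window plus what looks like a 2 MB-aligned 0x61000 header
# (an inference from this log, not DPDK source).
base=$(( 0x200000200000 ))
stride=$(( 0x400000000 + 0x200000 ))
for i in 0 1 2 3; do
  printf 'memseg list %d VA window at 0x%x\n' "$i" $(( base + i * stride ))
done
```

The computed addresses match the four `VA reserved for memseg list` lines in the trace (0x200000200000 through 0x200c00800000).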
00:22:12.573 EAL: No shared files mode enabled, IPC is disabled 00:22:12.573 EAL: No shared files mode enabled, IPC is disabled 00:22:12.832 EAL: TSC frequency is ~2200000 KHz 00:22:12.832 EAL: Main lcore 0 is ready (tid=7f10341d9a40;cpuset=[0]) 00:22:12.832 EAL: Trying to obtain current memory policy. 00:22:12.832 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:12.832 EAL: Restoring previous memory policy: 0 00:22:12.832 EAL: request: mp_malloc_sync 00:22:12.832 EAL: No shared files mode enabled, IPC is disabled 00:22:12.832 EAL: Heap on socket 0 was expanded by 2MB 00:22:12.832 EAL: No shared files mode enabled, IPC is disabled 00:22:12.832 EAL: Mem event callback 'spdk:(nil)' registered 00:22:12.832 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:22:12.832 00:22:12.832 00:22:12.832 CUnit - A unit testing framework for C - Version 2.1-3 00:22:12.832 http://cunit.sourceforge.net/ 00:22:12.832 00:22:12.832 00:22:12.832 Suite: components_suite 00:22:13.401 Test: vtophys_malloc_test ...passed 00:22:13.401 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 4MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 4MB 00:22:13.401 EAL: Trying to obtain current memory policy. 
00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 6MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 6MB 00:22:13.401 EAL: Trying to obtain current memory policy. 00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 10MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 10MB 00:22:13.401 EAL: Trying to obtain current memory policy. 00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 18MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 18MB 00:22:13.401 EAL: Trying to obtain current memory policy. 
00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 34MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 34MB 00:22:13.401 EAL: Trying to obtain current memory policy. 00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 66MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 66MB 00:22:13.401 EAL: Trying to obtain current memory policy. 00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 130MB 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was shrunk by 130MB 00:22:13.401 EAL: Trying to obtain current memory policy. 
00:22:13.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.401 EAL: Restoring previous memory policy: 4 00:22:13.401 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.401 EAL: request: mp_malloc_sync 00:22:13.401 EAL: No shared files mode enabled, IPC is disabled 00:22:13.401 EAL: Heap on socket 0 was expanded by 258MB 00:22:13.660 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.660 EAL: request: mp_malloc_sync 00:22:13.660 EAL: No shared files mode enabled, IPC is disabled 00:22:13.660 EAL: Heap on socket 0 was shrunk by 258MB 00:22:13.660 EAL: Trying to obtain current memory policy. 00:22:13.660 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:13.660 EAL: Restoring previous memory policy: 4 00:22:13.660 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.660 EAL: request: mp_malloc_sync 00:22:13.661 EAL: No shared files mode enabled, IPC is disabled 00:22:13.661 EAL: Heap on socket 0 was expanded by 514MB 00:22:13.919 EAL: Calling mem event callback 'spdk:(nil)' 00:22:13.919 EAL: request: mp_malloc_sync 00:22:13.919 EAL: No shared files mode enabled, IPC is disabled 00:22:13.919 EAL: Heap on socket 0 was shrunk by 514MB 00:22:13.919 EAL: Trying to obtain current memory policy. 
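
The heap sizes this vtophys malloc test walks through (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) grow as 2^k + 2 — a pattern observed in this log itself; the test's source may express it differently. A one-liner reproduces the sequence:

```shell
# Expansion sizes from the vtophys trace: each step is (1 << k) + 2 MB
# (pattern read off this log, not taken from the test source).
sizes=()
for k in $(seq 1 10); do
  sizes+=( $(( (1 << k) + 2 )) )
done
echo "${sizes[@]}"   # 4 6 10 18 34 66 130 258 514 1026
```
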
00:22:13.919 EAL: Setting policy MPOL_PREFERRED for socket 0 00:22:14.178 EAL: Restoring previous memory policy: 4 00:22:14.178 EAL: Calling mem event callback 'spdk:(nil)' 00:22:14.178 EAL: request: mp_malloc_sync 00:22:14.178 EAL: No shared files mode enabled, IPC is disabled 00:22:14.178 EAL: Heap on socket 0 was expanded by 1026MB 00:22:14.436 EAL: Calling mem event callback 'spdk:(nil)' 00:22:14.694 passed 00:22:14.694 00:22:14.694 Run Summary: Type Total Ran Passed Failed Inactive 00:22:14.694 suites 1 1 n/a 0 0 00:22:14.694 tests 2 2 2 0 0 00:22:14.694 asserts 5484 5484 5484 0 n/a 00:22:14.694 00:22:14.694 Elapsed time = 1.866 seconds 00:22:14.694 EAL: request: mp_malloc_sync 00:22:14.694 EAL: No shared files mode enabled, IPC is disabled 00:22:14.694 EAL: Heap on socket 0 was shrunk by 1026MB 00:22:14.694 EAL: Calling mem event callback 'spdk:(nil)' 00:22:14.694 EAL: request: mp_malloc_sync 00:22:14.694 EAL: No shared files mode enabled, IPC is disabled 00:22:14.694 EAL: Heap on socket 0 was shrunk by 2MB 00:22:14.694 EAL: No shared files mode enabled, IPC is disabled 00:22:14.694 EAL: No shared files mode enabled, IPC is disabled 00:22:14.694 EAL: No shared files mode enabled, IPC is disabled 00:22:14.694 ************************************ 00:22:14.694 END TEST env_vtophys 00:22:14.694 ************************************ 00:22:14.694 00:22:14.694 real 0m2.178s 00:22:14.694 user 0m1.025s 00:22:14.694 sys 0m1.000s 00:22:14.694 13:33:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.694 13:33:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:22:14.694 13:33:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:14.694 13:33:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:14.694 13:33:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.694 13:33:28 env -- common/autotest_common.sh@10 -- # set +x 00:22:14.694 
************************************ 00:22:14.694 START TEST env_pci 00:22:14.694 ************************************ 00:22:14.694 13:33:28 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:22:14.694 00:22:14.694 00:22:14.694 CUnit - A unit testing framework for C - Version 2.1-3 00:22:14.694 http://cunit.sourceforge.net/ 00:22:14.694 00:22:14.694 00:22:14.694 Suite: pci 00:22:14.694 Test: pci_hook ...[2024-10-28 13:33:28.841797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70381 has claimed it 00:22:14.954 passed 00:22:14.954 00:22:14.954 Run Summary: Type Total Ran Passed Failed Inactive 00:22:14.954 suites 1 1 n/a 0 0 00:22:14.954 tests 1 1 1 0 0 00:22:14.954 asserts 25 25 25 0 n/a 00:22:14.954 00:22:14.954 Elapsed time = 0.007 seconds 00:22:14.954 EAL: Cannot find device (10000:00:01.0) 00:22:14.954 EAL: Failed to attach device on primary process 00:22:14.954 00:22:14.954 real 0m0.067s 00:22:14.954 user 0m0.030s 00:22:14.954 sys 0m0.037s 00:22:14.954 13:33:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.954 ************************************ 00:22:14.954 13:33:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:22:14.954 END TEST env_pci 00:22:14.954 ************************************ 00:22:14.954 13:33:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:22:14.954 13:33:28 env -- env/env.sh@15 -- # uname 00:22:14.954 13:33:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:22:14.954 13:33:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:22:14.954 13:33:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:14.954 13:33:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:14.954 13:33:28 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.954 13:33:28 env -- common/autotest_common.sh@10 -- # set +x 00:22:14.954 ************************************ 00:22:14.954 START TEST env_dpdk_post_init 00:22:14.954 ************************************ 00:22:14.954 13:33:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:22:14.954 EAL: Detected CPU lcores: 10 00:22:14.954 EAL: Detected NUMA nodes: 1 00:22:14.954 EAL: Detected shared linkage of DPDK 00:22:14.954 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:14.954 EAL: Selected IOVA mode 'PA' 00:22:15.213 Starting DPDK initialization... 00:22:15.213 Starting SPDK post initialization... 00:22:15.213 SPDK NVMe probe 00:22:15.213 Attaching to 0000:00:10.0 00:22:15.213 Attaching to 0000:00:11.0 00:22:15.213 Attached to 0000:00:10.0 00:22:15.213 Attached to 0000:00:11.0 00:22:15.213 Cleaning up... 
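
The `argv='-c 0x1 '` / `'[' Linux = Linux ']'` lines in the env.sh trace above assemble the arguments passed to env_dpdk_post_init. A stand-alone sketch of that assembly, with the `$(uname)` result stubbed to Linux as in this run:

```shell
# Sketch of env.sh's argument assembly seen above; "$os" stands in for
# "$(uname)" (the trace shows the literal comparison '[' Linux = Linux ']').
os=Linux
argv='-c 0x1 '
[ "$os" = Linux ] && argv+='--base-virtaddr=0x200000000000'
echo "$argv"   # -c 0x1 --base-virtaddr=0x200000000000
```
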
00:22:15.213 00:22:15.213 real 0m0.266s 00:22:15.213 user 0m0.082s 00:22:15.213 sys 0m0.084s 00:22:15.213 13:33:29 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.213 ************************************ 00:22:15.213 13:33:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.213 END TEST env_dpdk_post_init 00:22:15.213 ************************************ 00:22:15.213 13:33:29 env -- env/env.sh@26 -- # uname 00:22:15.213 13:33:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:22:15.214 13:33:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:15.214 13:33:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:15.214 13:33:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.214 13:33:29 env -- common/autotest_common.sh@10 -- # set +x 00:22:15.214 ************************************ 00:22:15.214 START TEST env_mem_callbacks 00:22:15.214 ************************************ 00:22:15.214 13:33:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:22:15.214 EAL: Detected CPU lcores: 10 00:22:15.214 EAL: Detected NUMA nodes: 1 00:22:15.214 EAL: Detected shared linkage of DPDK 00:22:15.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:22:15.214 EAL: Selected IOVA mode 'PA' 00:22:15.472 00:22:15.472 00:22:15.472 CUnit - A unit testing framework for C - Version 2.1-3 00:22:15.472 http://cunit.sourceforge.net/ 00:22:15.472 00:22:15.472 00:22:15.472 Suite: memory 00:22:15.472 Test: test ... 
00:22:15.472 register 0x200000200000 2097152 00:22:15.472 malloc 3145728 00:22:15.472 register 0x200000400000 4194304 00:22:15.472 buf 0x200000500000 len 3145728 PASSED 00:22:15.472 malloc 64 00:22:15.472 buf 0x2000004fff40 len 64 PASSED 00:22:15.472 malloc 4194304 00:22:15.472 register 0x200000800000 6291456 00:22:15.472 buf 0x200000a00000 len 4194304 PASSED 00:22:15.472 free 0x200000500000 3145728 00:22:15.472 free 0x2000004fff40 64 00:22:15.472 unregister 0x200000400000 4194304 PASSED 00:22:15.472 free 0x200000a00000 4194304 00:22:15.472 unregister 0x200000800000 6291456 PASSED 00:22:15.472 malloc 8388608 00:22:15.472 register 0x200000400000 10485760 00:22:15.472 buf 0x200000600000 len 8388608 PASSED 00:22:15.472 free 0x200000600000 8388608 00:22:15.472 unregister 0x200000400000 10485760 PASSED 00:22:15.472 passed 00:22:15.472 00:22:15.472 Run Summary: Type Total Ran Passed Failed Inactive 00:22:15.472 suites 1 1 n/a 0 0 00:22:15.472 tests 1 1 1 0 0 00:22:15.472 asserts 15 15 15 0 n/a 00:22:15.472 00:22:15.472 Elapsed time = 0.012 seconds 00:22:15.472 00:22:15.472 real 0m0.200s 00:22:15.472 user 0m0.035s 00:22:15.472 sys 0m0.064s 00:22:15.472 13:33:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.472 13:33:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:22:15.472 ************************************ 00:22:15.472 END TEST env_mem_callbacks 00:22:15.472 ************************************ 00:22:15.472 00:22:15.472 real 0m3.561s 00:22:15.472 user 0m1.721s 00:22:15.472 sys 0m1.460s 00:22:15.472 13:33:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.472 13:33:29 env -- common/autotest_common.sh@10 -- # set +x 00:22:15.472 ************************************ 00:22:15.472 END TEST env 00:22:15.472 ************************************ 00:22:15.472 13:33:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:22:15.472 13:33:29 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:15.472 13:33:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:15.472 13:33:29 -- common/autotest_common.sh@10 -- # set +x 00:22:15.472 ************************************ 00:22:15.472 START TEST rpc 00:22:15.472 ************************************ 00:22:15.472 13:33:29 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:22:15.472 * Looking for test storage... 00:22:15.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.730 13:33:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.730 13:33:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.730 13:33:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.730 13:33:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.730 13:33:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.730 13:33:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:15.730 13:33:29 rpc -- scripts/common.sh@345 -- # : 1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.730 13:33:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.730 13:33:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@353 -- # local d=1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.730 13:33:29 rpc -- scripts/common.sh@355 -- # echo 1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.730 13:33:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@353 -- # local d=2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.730 13:33:29 rpc -- scripts/common.sh@355 -- # echo 2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.730 13:33:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.730 13:33:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.730 13:33:29 rpc -- scripts/common.sh@368 -- # return 0 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.730 --rc genhtml_branch_coverage=1 00:22:15.730 --rc genhtml_function_coverage=1 00:22:15.730 --rc genhtml_legend=1 00:22:15.730 --rc geninfo_all_blocks=1 00:22:15.730 --rc geninfo_unexecuted_blocks=1 00:22:15.730 00:22:15.730 ' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.730 --rc genhtml_branch_coverage=1 00:22:15.730 --rc genhtml_function_coverage=1 00:22:15.730 --rc genhtml_legend=1 00:22:15.730 --rc geninfo_all_blocks=1 00:22:15.730 --rc geninfo_unexecuted_blocks=1 00:22:15.730 00:22:15.730 ' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:15.730 --rc genhtml_branch_coverage=1 00:22:15.730 --rc genhtml_function_coverage=1 00:22:15.730 --rc genhtml_legend=1 00:22:15.730 --rc geninfo_all_blocks=1 00:22:15.730 --rc geninfo_unexecuted_blocks=1 00:22:15.730 00:22:15.730 ' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.730 --rc genhtml_branch_coverage=1 00:22:15.730 --rc genhtml_function_coverage=1 00:22:15.730 --rc genhtml_legend=1 00:22:15.730 --rc geninfo_all_blocks=1 00:22:15.730 --rc geninfo_unexecuted_blocks=1 00:22:15.730 00:22:15.730 ' 00:22:15.730 13:33:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70508 00:22:15.730 13:33:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:15.730 13:33:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:22:15.730 13:33:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70508 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 70508 ']' 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.730 13:33:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:15.988 [2024-10-28 13:33:29.890577] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:22:15.988 [2024-10-28 13:33:29.890780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70508 ] 00:22:15.988 [2024-10-28 13:33:30.044068] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.988 [2024-10-28 13:33:30.078675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.988 [2024-10-28 13:33:30.140001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:22:15.988 [2024-10-28 13:33:30.140090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70508' to capture a snapshot of events at runtime. 00:22:15.988 [2024-10-28 13:33:30.140110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.988 [2024-10-28 13:33:30.140128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.988 [2024-10-28 13:33:30.140185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70508 for offline analysis/debug. 
00:22:15.988 [2024-10-28 13:33:30.140872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.979 13:33:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.979 13:33:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:22:16.979 13:33:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:16.979 13:33:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:22:16.979 13:33:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:22:16.979 13:33:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:22:16.979 13:33:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:16.979 13:33:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.979 13:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:16.979 ************************************ 00:22:16.979 START TEST rpc_integrity 00:22:16.979 ************************************ 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:16.979 13:33:30 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:16.979 13:33:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:16.979 { 00:22:16.979 "name": "Malloc0", 00:22:16.979 "aliases": [ 00:22:16.979 "23e8e97c-1b7f-43e0-916b-554c7b9218d0" 00:22:16.979 ], 00:22:16.979 "product_name": "Malloc disk", 00:22:16.979 "block_size": 512, 00:22:16.979 "num_blocks": 16384, 00:22:16.979 "uuid": "23e8e97c-1b7f-43e0-916b-554c7b9218d0", 00:22:16.979 "assigned_rate_limits": { 00:22:16.979 "rw_ios_per_sec": 0, 00:22:16.979 "rw_mbytes_per_sec": 0, 00:22:16.979 "r_mbytes_per_sec": 0, 00:22:16.979 "w_mbytes_per_sec": 0 00:22:16.979 }, 00:22:16.979 "claimed": false, 00:22:16.979 "zoned": false, 00:22:16.979 "supported_io_types": { 00:22:16.979 "read": true, 00:22:16.979 "write": true, 00:22:16.979 "unmap": true, 00:22:16.979 "flush": true, 00:22:16.979 "reset": true, 00:22:16.979 "nvme_admin": false, 00:22:16.979 "nvme_io": false, 00:22:16.979 "nvme_io_md": false, 00:22:16.979 "write_zeroes": true, 00:22:16.979 "zcopy": true, 00:22:16.979 "get_zone_info": false, 00:22:16.979 "zone_management": false, 00:22:16.979 "zone_append": false, 00:22:16.979 "compare": false, 00:22:16.979 "compare_and_write": false, 00:22:16.979 "abort": true, 00:22:16.979 "seek_hole": false, 
00:22:16.979 "seek_data": false, 00:22:16.979 "copy": true, 00:22:16.979 "nvme_iov_md": false 00:22:16.979 }, 00:22:16.979 "memory_domains": [ 00:22:16.979 { 00:22:16.979 "dma_device_id": "system", 00:22:16.979 "dma_device_type": 1 00:22:16.979 }, 00:22:16.979 { 00:22:16.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.979 "dma_device_type": 2 00:22:16.979 } 00:22:16.979 ], 00:22:16.979 "driver_specific": {} 00:22:16.979 } 00:22:16.979 ]' 00:22:16.979 13:33:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:22:16.979 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:16.979 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:22:16.979 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.979 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:16.979 [2024-10-28 13:33:31.056094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:22:16.979 [2024-10-28 13:33:31.056211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.979 [2024-10-28 13:33:31.056291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:16.979 [2024-10-28 13:33:31.056319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.979 [2024-10-28 13:33:31.059573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.979 [2024-10-28 13:33:31.059746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:16.979 Passthru0 00:22:16.979 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.979 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:16.979 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.980 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:22:16.980 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.980 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:16.980 { 00:22:16.980 "name": "Malloc0", 00:22:16.980 "aliases": [ 00:22:16.980 "23e8e97c-1b7f-43e0-916b-554c7b9218d0" 00:22:16.980 ], 00:22:16.980 "product_name": "Malloc disk", 00:22:16.980 "block_size": 512, 00:22:16.980 "num_blocks": 16384, 00:22:16.980 "uuid": "23e8e97c-1b7f-43e0-916b-554c7b9218d0", 00:22:16.980 "assigned_rate_limits": { 00:22:16.980 "rw_ios_per_sec": 0, 00:22:16.980 "rw_mbytes_per_sec": 0, 00:22:16.980 "r_mbytes_per_sec": 0, 00:22:16.980 "w_mbytes_per_sec": 0 00:22:16.980 }, 00:22:16.980 "claimed": true, 00:22:16.980 "claim_type": "exclusive_write", 00:22:16.980 "zoned": false, 00:22:16.980 "supported_io_types": { 00:22:16.980 "read": true, 00:22:16.980 "write": true, 00:22:16.980 "unmap": true, 00:22:16.980 "flush": true, 00:22:16.980 "reset": true, 00:22:16.980 "nvme_admin": false, 00:22:16.980 "nvme_io": false, 00:22:16.980 "nvme_io_md": false, 00:22:16.980 "write_zeroes": true, 00:22:16.980 "zcopy": true, 00:22:16.980 "get_zone_info": false, 00:22:16.980 "zone_management": false, 00:22:16.980 "zone_append": false, 00:22:16.980 "compare": false, 00:22:16.980 "compare_and_write": false, 00:22:16.980 "abort": true, 00:22:16.980 "seek_hole": false, 00:22:16.980 "seek_data": false, 00:22:16.980 "copy": true, 00:22:16.980 "nvme_iov_md": false 00:22:16.980 }, 00:22:16.980 "memory_domains": [ 00:22:16.980 { 00:22:16.980 "dma_device_id": "system", 00:22:16.980 "dma_device_type": 1 00:22:16.980 }, 00:22:16.980 { 00:22:16.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.980 "dma_device_type": 2 00:22:16.980 } 00:22:16.980 ], 00:22:16.980 "driver_specific": {} 00:22:16.980 }, 00:22:16.980 { 00:22:16.980 "name": "Passthru0", 00:22:16.980 "aliases": [ 00:22:16.980 "c6b61975-cbf1-592e-8a60-22b968fe1174" 00:22:16.980 ], 00:22:16.980 "product_name": "passthru", 00:22:16.980 
"block_size": 512, 00:22:16.980 "num_blocks": 16384, 00:22:16.980 "uuid": "c6b61975-cbf1-592e-8a60-22b968fe1174", 00:22:16.980 "assigned_rate_limits": { 00:22:16.980 "rw_ios_per_sec": 0, 00:22:16.980 "rw_mbytes_per_sec": 0, 00:22:16.980 "r_mbytes_per_sec": 0, 00:22:16.980 "w_mbytes_per_sec": 0 00:22:16.980 }, 00:22:16.980 "claimed": false, 00:22:16.980 "zoned": false, 00:22:16.980 "supported_io_types": { 00:22:16.980 "read": true, 00:22:16.980 "write": true, 00:22:16.980 "unmap": true, 00:22:16.980 "flush": true, 00:22:16.980 "reset": true, 00:22:16.980 "nvme_admin": false, 00:22:16.980 "nvme_io": false, 00:22:16.980 "nvme_io_md": false, 00:22:16.980 "write_zeroes": true, 00:22:16.980 "zcopy": true, 00:22:16.980 "get_zone_info": false, 00:22:16.980 "zone_management": false, 00:22:16.980 "zone_append": false, 00:22:16.980 "compare": false, 00:22:16.980 "compare_and_write": false, 00:22:16.980 "abort": true, 00:22:16.980 "seek_hole": false, 00:22:16.980 "seek_data": false, 00:22:16.980 "copy": true, 00:22:16.980 "nvme_iov_md": false 00:22:16.980 }, 00:22:16.980 "memory_domains": [ 00:22:16.980 { 00:22:16.980 "dma_device_id": "system", 00:22:16.980 "dma_device_type": 1 00:22:16.980 }, 00:22:16.980 { 00:22:16.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.980 "dma_device_type": 2 00:22:16.980 } 00:22:16.980 ], 00:22:16.980 "driver_specific": { 00:22:16.980 "passthru": { 00:22:16.980 "name": "Passthru0", 00:22:16.980 "base_bdev_name": "Malloc0" 00:22:16.980 } 00:22:16.980 } 00:22:16.980 } 00:22:16.980 ]' 00:22:16.980 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:22:17.239 ************************************ 00:22:17.239 END TEST rpc_integrity 00:22:17.239 ************************************ 00:22:17.239 13:33:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:17.239 00:22:17.239 real 0m0.340s 00:22:17.239 user 0m0.218s 00:22:17.239 sys 0m0.051s 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:22:17.239 13:33:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:17.239 13:33:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.239 13:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 ************************************ 00:22:17.239 START TEST rpc_plugins 00:22:17.239 ************************************ 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:22:17.239 { 00:22:17.239 "name": "Malloc1", 00:22:17.239 "aliases": [ 00:22:17.239 "13dbbd16-13de-4096-a4ef-779dc7bd19a6" 00:22:17.239 ], 00:22:17.239 "product_name": "Malloc disk", 00:22:17.239 "block_size": 4096, 00:22:17.239 "num_blocks": 256, 00:22:17.239 "uuid": "13dbbd16-13de-4096-a4ef-779dc7bd19a6", 00:22:17.239 "assigned_rate_limits": { 00:22:17.239 "rw_ios_per_sec": 0, 00:22:17.239 "rw_mbytes_per_sec": 0, 00:22:17.239 "r_mbytes_per_sec": 0, 00:22:17.239 "w_mbytes_per_sec": 0 00:22:17.239 }, 00:22:17.239 "claimed": false, 00:22:17.239 "zoned": false, 00:22:17.239 "supported_io_types": { 00:22:17.239 "read": true, 00:22:17.239 "write": true, 00:22:17.239 "unmap": true, 00:22:17.239 "flush": true, 00:22:17.239 "reset": true, 00:22:17.239 "nvme_admin": false, 00:22:17.239 "nvme_io": false, 00:22:17.239 "nvme_io_md": false, 00:22:17.239 "write_zeroes": true, 00:22:17.239 "zcopy": true, 00:22:17.239 "get_zone_info": false, 00:22:17.239 "zone_management": false, 00:22:17.239 "zone_append": false, 00:22:17.239 "compare": false, 00:22:17.239 "compare_and_write": false, 00:22:17.239 "abort": true, 00:22:17.239 "seek_hole": false, 00:22:17.239 "seek_data": false, 00:22:17.239 "copy": 
true, 00:22:17.239 "nvme_iov_md": false 00:22:17.239 }, 00:22:17.239 "memory_domains": [ 00:22:17.239 { 00:22:17.239 "dma_device_id": "system", 00:22:17.239 "dma_device_type": 1 00:22:17.239 }, 00:22:17.239 { 00:22:17.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.239 "dma_device_type": 2 00:22:17.239 } 00:22:17.239 ], 00:22:17.239 "driver_specific": {} 00:22:17.239 } 00:22:17.239 ]' 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.239 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.239 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:17.498 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.498 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:22:17.498 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:22:17.498 ************************************ 00:22:17.498 END TEST rpc_plugins 00:22:17.498 ************************************ 00:22:17.498 13:33:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:22:17.498 00:22:17.498 real 0m0.163s 00:22:17.498 user 0m0.106s 00:22:17.498 sys 0m0.016s 00:22:17.498 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.498 13:33:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:22:17.498 13:33:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:22:17.498 13:33:31 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:17.498 13:33:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.498 13:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:17.498 ************************************ 00:22:17.498 START TEST rpc_trace_cmd_test 00:22:17.498 ************************************ 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.498 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:22:17.498 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70508", 00:22:17.498 "tpoint_group_mask": "0x8", 00:22:17.498 "iscsi_conn": { 00:22:17.498 "mask": "0x2", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "scsi": { 00:22:17.498 "mask": "0x4", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "bdev": { 00:22:17.498 "mask": "0x8", 00:22:17.498 "tpoint_mask": "0xffffffffffffffff" 00:22:17.498 }, 00:22:17.498 "nvmf_rdma": { 00:22:17.498 "mask": "0x10", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "nvmf_tcp": { 00:22:17.498 "mask": "0x20", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "ftl": { 00:22:17.498 "mask": "0x40", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "blobfs": { 00:22:17.498 "mask": "0x80", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "dsa": { 00:22:17.498 "mask": "0x200", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "thread": { 00:22:17.498 "mask": "0x400", 00:22:17.498 
"tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "nvme_pcie": { 00:22:17.498 "mask": "0x800", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "iaa": { 00:22:17.498 "mask": "0x1000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "nvme_tcp": { 00:22:17.498 "mask": "0x2000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "bdev_nvme": { 00:22:17.498 "mask": "0x4000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "sock": { 00:22:17.498 "mask": "0x8000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "blob": { 00:22:17.498 "mask": "0x10000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "bdev_raid": { 00:22:17.498 "mask": "0x20000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.498 }, 00:22:17.498 "scheduler": { 00:22:17.498 "mask": "0x40000", 00:22:17.498 "tpoint_mask": "0x0" 00:22:17.499 } 00:22:17.499 }' 00:22:17.499 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:22:17.499 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:22:17.499 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:22:17.499 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:22:17.499 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:22:17.757 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:22:17.757 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:22:17.757 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:22:17.757 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:22:17.757 ************************************ 00:22:17.757 END TEST rpc_trace_cmd_test 00:22:17.757 ************************************ 00:22:17.757 13:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:22:17.757 00:22:17.757 real 0m0.285s 00:22:17.758 user 
0m0.248s 00:22:17.758 sys 0m0.027s 00:22:17.758 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.758 13:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.758 13:33:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:22:17.758 13:33:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:22:17.758 13:33:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:22:17.758 13:33:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:17.758 13:33:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:17.758 13:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:17.758 ************************************ 00:22:17.758 START TEST rpc_daemon_integrity 00:22:17.758 ************************************ 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:22:17.758 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.017 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:22:18.017 { 00:22:18.017 "name": "Malloc2", 00:22:18.017 "aliases": [ 00:22:18.017 "fdd3e75d-30a2-4409-ba83-90e8934487d2" 00:22:18.017 ], 00:22:18.017 "product_name": "Malloc disk", 00:22:18.017 "block_size": 512, 00:22:18.017 "num_blocks": 16384, 00:22:18.017 "uuid": "fdd3e75d-30a2-4409-ba83-90e8934487d2", 00:22:18.017 "assigned_rate_limits": { 00:22:18.017 "rw_ios_per_sec": 0, 00:22:18.017 "rw_mbytes_per_sec": 0, 00:22:18.017 "r_mbytes_per_sec": 0, 00:22:18.017 "w_mbytes_per_sec": 0 00:22:18.018 }, 00:22:18.018 "claimed": false, 00:22:18.018 "zoned": false, 00:22:18.018 "supported_io_types": { 00:22:18.018 "read": true, 00:22:18.018 "write": true, 00:22:18.018 "unmap": true, 00:22:18.018 "flush": true, 00:22:18.018 "reset": true, 00:22:18.018 "nvme_admin": false, 00:22:18.018 "nvme_io": false, 00:22:18.018 "nvme_io_md": false, 00:22:18.018 "write_zeroes": true, 00:22:18.018 "zcopy": true, 00:22:18.018 "get_zone_info": false, 00:22:18.018 "zone_management": false, 00:22:18.018 "zone_append": false, 00:22:18.018 "compare": false, 00:22:18.018 "compare_and_write": false, 00:22:18.018 "abort": true, 00:22:18.018 "seek_hole": false, 00:22:18.018 "seek_data": false, 00:22:18.018 "copy": true, 00:22:18.018 "nvme_iov_md": false 00:22:18.018 }, 00:22:18.018 "memory_domains": [ 00:22:18.018 { 00:22:18.018 "dma_device_id": "system", 00:22:18.018 "dma_device_type": 1 00:22:18.018 }, 00:22:18.018 { 00:22:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.018 "dma_device_type": 2 00:22:18.018 } 
00:22:18.018 ], 00:22:18.018 "driver_specific": {} 00:22:18.018 } 00:22:18.018 ]' 00:22:18.018 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:22:18.018 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:22:18.018 13:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:22:18.018 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.018 13:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.018 [2024-10-28 13:33:32.003390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:22:18.018 [2024-10-28 13:33:32.003473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.018 [2024-10-28 13:33:32.003507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:18.018 [2024-10-28 13:33:32.003525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.018 [2024-10-28 13:33:32.006852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.018 [2024-10-28 13:33:32.006906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:22:18.018 Passthru0 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:22:18.018 { 00:22:18.018 "name": "Malloc2", 00:22:18.018 "aliases": [ 00:22:18.018 "fdd3e75d-30a2-4409-ba83-90e8934487d2" 
00:22:18.018 ], 00:22:18.018 "product_name": "Malloc disk", 00:22:18.018 "block_size": 512, 00:22:18.018 "num_blocks": 16384, 00:22:18.018 "uuid": "fdd3e75d-30a2-4409-ba83-90e8934487d2", 00:22:18.018 "assigned_rate_limits": { 00:22:18.018 "rw_ios_per_sec": 0, 00:22:18.018 "rw_mbytes_per_sec": 0, 00:22:18.018 "r_mbytes_per_sec": 0, 00:22:18.018 "w_mbytes_per_sec": 0 00:22:18.018 }, 00:22:18.018 "claimed": true, 00:22:18.018 "claim_type": "exclusive_write", 00:22:18.018 "zoned": false, 00:22:18.018 "supported_io_types": { 00:22:18.018 "read": true, 00:22:18.018 "write": true, 00:22:18.018 "unmap": true, 00:22:18.018 "flush": true, 00:22:18.018 "reset": true, 00:22:18.018 "nvme_admin": false, 00:22:18.018 "nvme_io": false, 00:22:18.018 "nvme_io_md": false, 00:22:18.018 "write_zeroes": true, 00:22:18.018 "zcopy": true, 00:22:18.018 "get_zone_info": false, 00:22:18.018 "zone_management": false, 00:22:18.018 "zone_append": false, 00:22:18.018 "compare": false, 00:22:18.018 "compare_and_write": false, 00:22:18.018 "abort": true, 00:22:18.018 "seek_hole": false, 00:22:18.018 "seek_data": false, 00:22:18.018 "copy": true, 00:22:18.018 "nvme_iov_md": false 00:22:18.018 }, 00:22:18.018 "memory_domains": [ 00:22:18.018 { 00:22:18.018 "dma_device_id": "system", 00:22:18.018 "dma_device_type": 1 00:22:18.018 }, 00:22:18.018 { 00:22:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.018 "dma_device_type": 2 00:22:18.018 } 00:22:18.018 ], 00:22:18.018 "driver_specific": {} 00:22:18.018 }, 00:22:18.018 { 00:22:18.018 "name": "Passthru0", 00:22:18.018 "aliases": [ 00:22:18.018 "0f6fb598-a1c8-5fa2-828d-ae4c055c46f5" 00:22:18.018 ], 00:22:18.018 "product_name": "passthru", 00:22:18.018 "block_size": 512, 00:22:18.018 "num_blocks": 16384, 00:22:18.018 "uuid": "0f6fb598-a1c8-5fa2-828d-ae4c055c46f5", 00:22:18.018 "assigned_rate_limits": { 00:22:18.018 "rw_ios_per_sec": 0, 00:22:18.018 "rw_mbytes_per_sec": 0, 00:22:18.018 "r_mbytes_per_sec": 0, 00:22:18.018 "w_mbytes_per_sec": 0 
00:22:18.018 }, 00:22:18.018 "claimed": false, 00:22:18.018 "zoned": false, 00:22:18.018 "supported_io_types": { 00:22:18.018 "read": true, 00:22:18.018 "write": true, 00:22:18.018 "unmap": true, 00:22:18.018 "flush": true, 00:22:18.018 "reset": true, 00:22:18.018 "nvme_admin": false, 00:22:18.018 "nvme_io": false, 00:22:18.018 "nvme_io_md": false, 00:22:18.018 "write_zeroes": true, 00:22:18.018 "zcopy": true, 00:22:18.018 "get_zone_info": false, 00:22:18.018 "zone_management": false, 00:22:18.018 "zone_append": false, 00:22:18.018 "compare": false, 00:22:18.018 "compare_and_write": false, 00:22:18.018 "abort": true, 00:22:18.018 "seek_hole": false, 00:22:18.018 "seek_data": false, 00:22:18.018 "copy": true, 00:22:18.018 "nvme_iov_md": false 00:22:18.018 }, 00:22:18.018 "memory_domains": [ 00:22:18.018 { 00:22:18.018 "dma_device_id": "system", 00:22:18.018 "dma_device_type": 1 00:22:18.018 }, 00:22:18.018 { 00:22:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.018 "dma_device_type": 2 00:22:18.018 } 00:22:18.018 ], 00:22:18.018 "driver_specific": { 00:22:18.018 "passthru": { 00:22:18.018 "name": "Passthru0", 00:22:18.018 "base_bdev_name": "Malloc2" 00:22:18.018 } 00:22:18.018 } 00:22:18.018 } 00:22:18.018 ]' 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:22:18.018 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:22:18.308 ************************************ 00:22:18.308 END TEST rpc_daemon_integrity 00:22:18.308 ************************************ 00:22:18.308 13:33:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:22:18.308 00:22:18.308 real 0m0.347s 00:22:18.308 user 0m0.237s 00:22:18.308 sys 0m0.043s 00:22:18.308 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.308 13:33:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:22:18.308 13:33:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:18.308 13:33:32 rpc -- rpc/rpc.sh@84 -- # killprocess 70508 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 70508 ']' 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@954 -- # kill -0 70508 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@955 -- # uname 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70508 00:22:18.308 killing process with pid 70508 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70508' 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@969 -- # kill 70508 00:22:18.308 13:33:32 rpc -- common/autotest_common.sh@974 -- # wait 70508 00:22:18.568 ************************************ 00:22:18.568 END TEST rpc 00:22:18.568 ************************************ 00:22:18.568 00:22:18.568 real 0m3.168s 00:22:18.568 user 0m3.972s 00:22:18.568 sys 0m0.873s 00:22:18.568 13:33:32 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.568 13:33:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.827 13:33:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:22:18.827 13:33:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.827 13:33:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.827 13:33:32 -- common/autotest_common.sh@10 -- # set +x 00:22:18.827 ************************************ 00:22:18.827 START TEST skip_rpc 00:22:18.827 ************************************ 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:22:18.827 * Looking for test storage... 
00:22:18.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.827 13:33:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.827 --rc genhtml_branch_coverage=1 00:22:18.827 --rc genhtml_function_coverage=1 00:22:18.827 --rc genhtml_legend=1 00:22:18.827 --rc geninfo_all_blocks=1 00:22:18.827 --rc geninfo_unexecuted_blocks=1 00:22:18.827 00:22:18.827 ' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.827 --rc genhtml_branch_coverage=1 00:22:18.827 --rc genhtml_function_coverage=1 00:22:18.827 --rc genhtml_legend=1 00:22:18.827 --rc geninfo_all_blocks=1 00:22:18.827 --rc geninfo_unexecuted_blocks=1 00:22:18.827 00:22:18.827 ' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:22:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.827 --rc genhtml_branch_coverage=1 00:22:18.827 --rc genhtml_function_coverage=1 00:22:18.827 --rc genhtml_legend=1 00:22:18.827 --rc geninfo_all_blocks=1 00:22:18.827 --rc geninfo_unexecuted_blocks=1 00:22:18.827 00:22:18.827 ' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:18.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.827 --rc genhtml_branch_coverage=1 00:22:18.827 --rc genhtml_function_coverage=1 00:22:18.827 --rc genhtml_legend=1 00:22:18.827 --rc geninfo_all_blocks=1 00:22:18.827 --rc geninfo_unexecuted_blocks=1 00:22:18.827 00:22:18.827 ' 00:22:18.827 13:33:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:18.827 13:33:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:18.827 13:33:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.827 13:33:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.827 ************************************ 00:22:18.827 START TEST skip_rpc 00:22:18.827 ************************************ 00:22:18.827 13:33:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:22:18.827 13:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70721 00:22:18.827 13:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:22:18.827 13:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:18.827 13:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:22:19.086 [2024-10-28 13:33:33.108957] Starting SPDK v25.01-pre 
git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:19.086 [2024-10-28 13:33:33.109452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70721 ] 00:22:19.345 [2024-10-28 13:33:33.264787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:19.345 [2024-10-28 13:33:33.298846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.345 [2024-10-28 13:33:33.352642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70721 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70721 ']' 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70721 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:22:24.611 13:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70721 00:22:24.611 killing process with pid 70721 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70721' 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70721 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70721 00:22:24.611 ************************************ 00:22:24.611 END TEST skip_rpc 00:22:24.611 ************************************ 00:22:24.611 00:22:24.611 real 0m5.486s 00:22:24.611 user 0m4.998s 00:22:24.611 sys 0m0.383s 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:24.611 13:33:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:24.611 13:33:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:22:24.611 13:33:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:22:24.611 13:33:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:24.611 13:33:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:24.611 ************************************ 00:22:24.611 START TEST skip_rpc_with_json 00:22:24.611 ************************************ 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70803 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70803 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70803 ']' 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.611 13:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:24.611 [2024-10-28 13:33:38.658712] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:22:24.611 [2024-10-28 13:33:38.658884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70803 ] 00:22:24.870 [2024-10-28 13:33:38.803556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:24.870 [2024-10-28 13:33:38.845763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.870 [2024-10-28 13:33:38.902181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:25.807 [2024-10-28 13:33:39.651980] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:22:25.807 request: 00:22:25.807 { 00:22:25.807 "trtype": "tcp", 00:22:25.807 "method": "nvmf_get_transports", 00:22:25.807 "req_id": 1 00:22:25.807 } 00:22:25.807 Got JSON-RPC error response 00:22:25.807 response: 00:22:25.807 { 00:22:25.807 "code": -19, 00:22:25.807 "message": "No such device" 00:22:25.807 } 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.807 13:33:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:25.807 [2024-10-28 13:33:39.664087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.807 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:25.807 { 00:22:25.807 "subsystems": [ 00:22:25.807 { 00:22:25.807 "subsystem": "fsdev", 00:22:25.807 "config": [ 00:22:25.807 { 00:22:25.807 "method": "fsdev_set_opts", 00:22:25.807 "params": { 00:22:25.807 "fsdev_io_pool_size": 65535, 00:22:25.807 "fsdev_io_cache_size": 256 00:22:25.807 } 00:22:25.807 } 00:22:25.807 ] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "keyring", 00:22:25.807 "config": [] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "iobuf", 00:22:25.807 "config": [ 00:22:25.807 { 00:22:25.807 "method": "iobuf_set_options", 00:22:25.807 "params": { 00:22:25.807 "small_pool_count": 8192, 00:22:25.807 "large_pool_count": 1024, 00:22:25.807 "small_bufsize": 8192, 00:22:25.807 "large_bufsize": 135168, 00:22:25.807 "enable_numa": false 00:22:25.807 } 00:22:25.807 } 00:22:25.807 ] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "sock", 00:22:25.807 "config": [ 00:22:25.807 { 00:22:25.807 "method": "sock_set_default_impl", 00:22:25.807 "params": { 00:22:25.807 "impl_name": "posix" 00:22:25.807 } 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "method": "sock_impl_set_options", 00:22:25.807 "params": { 00:22:25.807 "impl_name": "ssl", 
00:22:25.807 "recv_buf_size": 4096, 00:22:25.807 "send_buf_size": 4096, 00:22:25.807 "enable_recv_pipe": true, 00:22:25.807 "enable_quickack": false, 00:22:25.807 "enable_placement_id": 0, 00:22:25.807 "enable_zerocopy_send_server": true, 00:22:25.807 "enable_zerocopy_send_client": false, 00:22:25.807 "zerocopy_threshold": 0, 00:22:25.807 "tls_version": 0, 00:22:25.807 "enable_ktls": false 00:22:25.807 } 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "method": "sock_impl_set_options", 00:22:25.807 "params": { 00:22:25.807 "impl_name": "posix", 00:22:25.807 "recv_buf_size": 2097152, 00:22:25.807 "send_buf_size": 2097152, 00:22:25.807 "enable_recv_pipe": true, 00:22:25.807 "enable_quickack": false, 00:22:25.807 "enable_placement_id": 0, 00:22:25.807 "enable_zerocopy_send_server": true, 00:22:25.807 "enable_zerocopy_send_client": false, 00:22:25.807 "zerocopy_threshold": 0, 00:22:25.807 "tls_version": 0, 00:22:25.807 "enable_ktls": false 00:22:25.807 } 00:22:25.807 } 00:22:25.807 ] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "vmd", 00:22:25.807 "config": [] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "accel", 00:22:25.807 "config": [ 00:22:25.807 { 00:22:25.807 "method": "accel_set_options", 00:22:25.807 "params": { 00:22:25.807 "small_cache_size": 128, 00:22:25.807 "large_cache_size": 16, 00:22:25.807 "task_count": 2048, 00:22:25.807 "sequence_count": 2048, 00:22:25.807 "buf_count": 2048 00:22:25.807 } 00:22:25.807 } 00:22:25.807 ] 00:22:25.807 }, 00:22:25.807 { 00:22:25.807 "subsystem": "bdev", 00:22:25.807 "config": [ 00:22:25.807 { 00:22:25.807 "method": "bdev_set_options", 00:22:25.807 "params": { 00:22:25.807 "bdev_io_pool_size": 65535, 00:22:25.807 "bdev_io_cache_size": 256, 00:22:25.807 "bdev_auto_examine": true, 00:22:25.808 "iobuf_small_cache_size": 128, 00:22:25.808 "iobuf_large_cache_size": 16 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "bdev_raid_set_options", 00:22:25.808 "params": { 00:22:25.808 
"process_window_size_kb": 1024, 00:22:25.808 "process_max_bandwidth_mb_sec": 0 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "bdev_iscsi_set_options", 00:22:25.808 "params": { 00:22:25.808 "timeout_sec": 30 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "bdev_nvme_set_options", 00:22:25.808 "params": { 00:22:25.808 "action_on_timeout": "none", 00:22:25.808 "timeout_us": 0, 00:22:25.808 "timeout_admin_us": 0, 00:22:25.808 "keep_alive_timeout_ms": 10000, 00:22:25.808 "arbitration_burst": 0, 00:22:25.808 "low_priority_weight": 0, 00:22:25.808 "medium_priority_weight": 0, 00:22:25.808 "high_priority_weight": 0, 00:22:25.808 "nvme_adminq_poll_period_us": 10000, 00:22:25.808 "nvme_ioq_poll_period_us": 0, 00:22:25.808 "io_queue_requests": 0, 00:22:25.808 "delay_cmd_submit": true, 00:22:25.808 "transport_retry_count": 4, 00:22:25.808 "bdev_retry_count": 3, 00:22:25.808 "transport_ack_timeout": 0, 00:22:25.808 "ctrlr_loss_timeout_sec": 0, 00:22:25.808 "reconnect_delay_sec": 0, 00:22:25.808 "fast_io_fail_timeout_sec": 0, 00:22:25.808 "disable_auto_failback": false, 00:22:25.808 "generate_uuids": false, 00:22:25.808 "transport_tos": 0, 00:22:25.808 "nvme_error_stat": false, 00:22:25.808 "rdma_srq_size": 0, 00:22:25.808 "io_path_stat": false, 00:22:25.808 "allow_accel_sequence": false, 00:22:25.808 "rdma_max_cq_size": 0, 00:22:25.808 "rdma_cm_event_timeout_ms": 0, 00:22:25.808 "dhchap_digests": [ 00:22:25.808 "sha256", 00:22:25.808 "sha384", 00:22:25.808 "sha512" 00:22:25.808 ], 00:22:25.808 "dhchap_dhgroups": [ 00:22:25.808 "null", 00:22:25.808 "ffdhe2048", 00:22:25.808 "ffdhe3072", 00:22:25.808 "ffdhe4096", 00:22:25.808 "ffdhe6144", 00:22:25.808 "ffdhe8192" 00:22:25.808 ] 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "bdev_nvme_set_hotplug", 00:22:25.808 "params": { 00:22:25.808 "period_us": 100000, 00:22:25.808 "enable": false 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": 
"bdev_wait_for_examine" 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "scsi", 00:22:25.808 "config": null 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "scheduler", 00:22:25.808 "config": [ 00:22:25.808 { 00:22:25.808 "method": "framework_set_scheduler", 00:22:25.808 "params": { 00:22:25.808 "name": "static" 00:22:25.808 } 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "vhost_scsi", 00:22:25.808 "config": [] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "vhost_blk", 00:22:25.808 "config": [] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "ublk", 00:22:25.808 "config": [] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "nbd", 00:22:25.808 "config": [] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "nvmf", 00:22:25.808 "config": [ 00:22:25.808 { 00:22:25.808 "method": "nvmf_set_config", 00:22:25.808 "params": { 00:22:25.808 "discovery_filter": "match_any", 00:22:25.808 "admin_cmd_passthru": { 00:22:25.808 "identify_ctrlr": false 00:22:25.808 }, 00:22:25.808 "dhchap_digests": [ 00:22:25.808 "sha256", 00:22:25.808 "sha384", 00:22:25.808 "sha512" 00:22:25.808 ], 00:22:25.808 "dhchap_dhgroups": [ 00:22:25.808 "null", 00:22:25.808 "ffdhe2048", 00:22:25.808 "ffdhe3072", 00:22:25.808 "ffdhe4096", 00:22:25.808 "ffdhe6144", 00:22:25.808 "ffdhe8192" 00:22:25.808 ] 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "nvmf_set_max_subsystems", 00:22:25.808 "params": { 00:22:25.808 "max_subsystems": 1024 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "nvmf_set_crdt", 00:22:25.808 "params": { 00:22:25.808 "crdt1": 0, 00:22:25.808 "crdt2": 0, 00:22:25.808 "crdt3": 0 00:22:25.808 } 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "method": "nvmf_create_transport", 00:22:25.808 "params": { 00:22:25.808 "trtype": "TCP", 00:22:25.808 "max_queue_depth": 128, 00:22:25.808 "max_io_qpairs_per_ctrlr": 127, 00:22:25.808 
"in_capsule_data_size": 4096, 00:22:25.808 "max_io_size": 131072, 00:22:25.808 "io_unit_size": 131072, 00:22:25.808 "max_aq_depth": 128, 00:22:25.808 "num_shared_buffers": 511, 00:22:25.808 "buf_cache_size": 4294967295, 00:22:25.808 "dif_insert_or_strip": false, 00:22:25.808 "zcopy": false, 00:22:25.808 "c2h_success": true, 00:22:25.808 "sock_priority": 0, 00:22:25.808 "abort_timeout_sec": 1, 00:22:25.808 "ack_timeout": 0, 00:22:25.808 "data_wr_pool_size": 0 00:22:25.808 } 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 }, 00:22:25.808 { 00:22:25.808 "subsystem": "iscsi", 00:22:25.808 "config": [ 00:22:25.808 { 00:22:25.808 "method": "iscsi_set_options", 00:22:25.808 "params": { 00:22:25.808 "node_base": "iqn.2016-06.io.spdk", 00:22:25.808 "max_sessions": 128, 00:22:25.808 "max_connections_per_session": 2, 00:22:25.808 "max_queue_depth": 64, 00:22:25.808 "default_time2wait": 2, 00:22:25.808 "default_time2retain": 20, 00:22:25.808 "first_burst_length": 8192, 00:22:25.808 "immediate_data": true, 00:22:25.808 "allow_duplicated_isid": false, 00:22:25.808 "error_recovery_level": 0, 00:22:25.808 "nop_timeout": 60, 00:22:25.808 "nop_in_interval": 30, 00:22:25.808 "disable_chap": false, 00:22:25.808 "require_chap": false, 00:22:25.808 "mutual_chap": false, 00:22:25.808 "chap_group": 0, 00:22:25.808 "max_large_datain_per_connection": 64, 00:22:25.808 "max_r2t_per_connection": 4, 00:22:25.808 "pdu_pool_size": 36864, 00:22:25.808 "immediate_data_pool_size": 16384, 00:22:25.808 "data_out_pool_size": 2048 00:22:25.808 } 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 } 00:22:25.808 ] 00:22:25.808 } 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70803 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70803 ']' 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill 
-0 70803 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70803 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:25.808 killing process with pid 70803 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70803' 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70803 00:22:25.808 13:33:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70803 00:22:26.374 13:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70837 00:22:26.374 13:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:26.374 13:33:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70837 ']' 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.636 killing process with pid 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70837' 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70837 00:22:31.636 13:33:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:22:31.894 00:22:31.894 real 0m7.283s 00:22:31.894 user 0m6.843s 00:22:31.894 sys 0m0.905s 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:22:31.894 ************************************ 00:22:31.894 END TEST skip_rpc_with_json 00:22:31.894 ************************************ 00:22:31.894 13:33:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:22:31.894 13:33:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:31.894 13:33:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.894 13:33:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:31.894 ************************************ 00:22:31.894 START TEST skip_rpc_with_delay 00:22:31.894 ************************************ 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:31.894 13:33:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:22:31.894 [2024-10-28 13:33:45.970960] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:22:31.894 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:22:31.894 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:31.894 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:31.894 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:31.894 00:22:31.894 real 0m0.188s 00:22:31.894 user 0m0.093s 00:22:31.894 sys 0m0.093s 00:22:31.895 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.895 13:33:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:22:31.895 ************************************ 00:22:31.895 END TEST skip_rpc_with_delay 00:22:31.895 ************************************ 00:22:32.152 13:33:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:22:32.152 13:33:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:22:32.152 13:33:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:22:32.152 13:33:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:32.152 13:33:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.152 13:33:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:32.152 ************************************ 00:22:32.152 START TEST exit_on_failed_rpc_init 00:22:32.152 ************************************ 00:22:32.152 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:22:32.152 13:33:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70948 00:22:32.152 13:33:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:32.152 13:33:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70948 00:22:32.153 13:33:46 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70948 ']' 00:22:32.153 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.153 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.153 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.153 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.153 13:33:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.153 [2024-10-28 13:33:46.187526] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:32.153 [2024-10-28 13:33:46.187741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70948 ] 00:22:32.411 [2024-10-28 13:33:46.337575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:32.411 [2024-10-28 13:33:46.369418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.411 [2024-10-28 13:33:46.424036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:22:33.348 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:22:33.348 [2024-10-28 13:33:47.333703] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:33.348 [2024-10-28 13:33:47.334004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70966 ] 00:22:33.348 [2024-10-28 13:33:47.491986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:33.606 [2024-10-28 13:33:47.528668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.606 [2024-10-28 13:33:47.589003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.606 [2024-10-28 13:33:47.589164] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:22:33.606 [2024-10-28 13:33:47.589209] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:22:33.606 [2024-10-28 13:33:47.589244] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70948 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70948 ']' 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70948 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70948 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.606 killing process with pid 70948 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 70948' 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70948 00:22:33.606 13:33:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70948 00:22:34.198 00:22:34.199 real 0m2.120s 00:22:34.199 user 0m2.433s 00:22:34.199 sys 0m0.614s 00:22:34.199 13:33:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.199 13:33:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 ************************************ 00:22:34.199 END TEST exit_on_failed_rpc_init 00:22:34.199 ************************************ 00:22:34.199 13:33:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:22:34.199 ************************************ 00:22:34.199 END TEST skip_rpc 00:22:34.199 ************************************ 00:22:34.199 00:22:34.199 real 0m15.469s 00:22:34.199 user 0m14.553s 00:22:34.199 sys 0m2.193s 00:22:34.199 13:33:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.199 13:33:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 13:33:48 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:34.199 13:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:34.199 13:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.199 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:34.199 ************************************ 00:22:34.199 START TEST rpc_client 00:22:34.199 ************************************ 00:22:34.199 13:33:48 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:22:34.463 * Looking for test storage... 
00:22:34.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.463 13:33:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:34.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.463 --rc genhtml_branch_coverage=1 00:22:34.463 --rc genhtml_function_coverage=1 00:22:34.463 --rc genhtml_legend=1 00:22:34.463 --rc geninfo_all_blocks=1 00:22:34.463 --rc geninfo_unexecuted_blocks=1 00:22:34.463 00:22:34.463 ' 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:34.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.463 --rc genhtml_branch_coverage=1 00:22:34.463 --rc genhtml_function_coverage=1 00:22:34.463 --rc genhtml_legend=1 00:22:34.463 --rc geninfo_all_blocks=1 00:22:34.463 --rc geninfo_unexecuted_blocks=1 00:22:34.463 00:22:34.463 ' 00:22:34.463 13:33:48 rpc_client -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:34.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.463 --rc genhtml_branch_coverage=1 00:22:34.463 --rc genhtml_function_coverage=1 00:22:34.463 --rc genhtml_legend=1 00:22:34.463 --rc geninfo_all_blocks=1 00:22:34.463 --rc geninfo_unexecuted_blocks=1 00:22:34.463 00:22:34.463 ' 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:34.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.463 --rc genhtml_branch_coverage=1 00:22:34.463 --rc genhtml_function_coverage=1 00:22:34.463 --rc genhtml_legend=1 00:22:34.463 --rc geninfo_all_blocks=1 00:22:34.463 --rc geninfo_unexecuted_blocks=1 00:22:34.463 00:22:34.463 ' 00:22:34.463 13:33:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:22:34.463 OK 00:22:34.463 13:33:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:22:34.463 00:22:34.463 real 0m0.249s 00:22:34.463 user 0m0.148s 00:22:34.463 sys 0m0.112s 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.463 13:33:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:22:34.463 ************************************ 00:22:34.463 END TEST rpc_client 00:22:34.463 ************************************ 00:22:34.463 13:33:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:34.463 13:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:34.463 13:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.463 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:34.463 ************************************ 00:22:34.463 START TEST json_config 00:22:34.463 ************************************ 00:22:34.463 13:33:48 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.723 13:33:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.723 13:33:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.723 13:33:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.723 13:33:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.723 13:33:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.723 13:33:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:22:34.723 13:33:48 json_config -- scripts/common.sh@345 -- # : 1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.723 13:33:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.723 13:33:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@353 -- # local d=1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.723 13:33:48 json_config -- scripts/common.sh@355 -- # echo 1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.723 13:33:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@353 -- # local d=2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.723 13:33:48 json_config -- scripts/common.sh@355 -- # echo 2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.723 13:33:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.723 13:33:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.723 13:33:48 json_config -- scripts/common.sh@368 -- # return 0 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:34.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.723 --rc genhtml_branch_coverage=1 00:22:34.723 --rc genhtml_function_coverage=1 00:22:34.723 --rc genhtml_legend=1 00:22:34.723 --rc geninfo_all_blocks=1 00:22:34.723 --rc geninfo_unexecuted_blocks=1 00:22:34.723 00:22:34.723 ' 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:34.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.723 --rc genhtml_branch_coverage=1 00:22:34.723 --rc genhtml_function_coverage=1 00:22:34.723 --rc genhtml_legend=1 00:22:34.723 --rc geninfo_all_blocks=1 00:22:34.723 --rc geninfo_unexecuted_blocks=1 00:22:34.723 00:22:34.723 ' 00:22:34.723 13:33:48 json_config -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:34.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.723 --rc genhtml_branch_coverage=1 00:22:34.723 --rc genhtml_function_coverage=1 00:22:34.723 --rc genhtml_legend=1 00:22:34.723 --rc geninfo_all_blocks=1 00:22:34.723 --rc geninfo_unexecuted_blocks=1 00:22:34.723 00:22:34.723 ' 00:22:34.723 13:33:48 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:34.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.723 --rc genhtml_branch_coverage=1 00:22:34.723 --rc genhtml_function_coverage=1 00:22:34.723 --rc genhtml_legend=1 00:22:34.723 --rc geninfo_all_blocks=1 00:22:34.723 --rc geninfo_unexecuted_blocks=1 00:22:34.723 00:22:34.723 ' 00:22:34.723 13:33:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84390273-455e-4de1-ba26-b651941d9928 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=84390273-455e-4de1-ba26-b651941d9928 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.723 13:33:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.723 13:33:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.723 13:33:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.723 13:33:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.723 13:33:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.723 13:33:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.724 13:33:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.724 13:33:48 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.724 13:33:48 json_config -- paths/export.sh@5 -- # export PATH 00:22:34.724 13:33:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@51 -- # : 0 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.724 13:33:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:22:34.724 WARNING: No tests are enabled so not running JSON configuration tests 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:22:34.724 13:33:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:22:34.724 00:22:34.724 real 0m0.203s 00:22:34.724 user 0m0.138s 00:22:34.724 sys 0m0.072s 00:22:34.724 13:33:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.724 13:33:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:22:34.724 ************************************ 00:22:34.724 END TEST json_config 00:22:34.724 ************************************ 00:22:34.724 13:33:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:34.724 13:33:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:34.724 13:33:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.724 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:34.724 ************************************ 00:22:34.724 START TEST json_config_extra_key 00:22:34.724 ************************************ 00:22:34.724 13:33:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:22:34.982 13:33:48 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:34.982 13:33:48 json_config_extra_key -- 
common/autotest_common.sh@1689 -- # lcov --version 00:22:34.982 13:33:48 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:34.982 13:33:48 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:34.982 13:33:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.982 13:33:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.982 13:33:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:22:34.982 13:33:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.983 --rc genhtml_branch_coverage=1 00:22:34.983 --rc genhtml_function_coverage=1 00:22:34.983 --rc genhtml_legend=1 00:22:34.983 --rc geninfo_all_blocks=1 00:22:34.983 --rc geninfo_unexecuted_blocks=1 00:22:34.983 00:22:34.983 ' 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.983 --rc genhtml_branch_coverage=1 00:22:34.983 --rc genhtml_function_coverage=1 00:22:34.983 --rc 
genhtml_legend=1 00:22:34.983 --rc geninfo_all_blocks=1 00:22:34.983 --rc geninfo_unexecuted_blocks=1 00:22:34.983 00:22:34.983 ' 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.983 --rc genhtml_branch_coverage=1 00:22:34.983 --rc genhtml_function_coverage=1 00:22:34.983 --rc genhtml_legend=1 00:22:34.983 --rc geninfo_all_blocks=1 00:22:34.983 --rc geninfo_unexecuted_blocks=1 00:22:34.983 00:22:34.983 ' 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:34.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.983 --rc genhtml_branch_coverage=1 00:22:34.983 --rc genhtml_function_coverage=1 00:22:34.983 --rc genhtml_legend=1 00:22:34.983 --rc geninfo_all_blocks=1 00:22:34.983 --rc geninfo_unexecuted_blocks=1 00:22:34.983 00:22:34.983 ' 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84390273-455e-4de1-ba26-b651941d9928 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=84390273-455e-4de1-ba26-b651941d9928 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.983 13:33:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.983 13:33:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.983 13:33:49 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.983 13:33:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.983 13:33:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:22:34.983 13:33:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.983 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.983 13:33:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:22:34.983 INFO: launching applications... 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:22:34.983 13:33:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71154 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:22:34.983 Waiting for target to run... 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71154 /var/tmp/spdk_tgt.sock 00:22:34.983 13:33:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71154 ']' 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:22:34.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.983 13:33:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:35.240 [2024-10-28 13:33:49.175196] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:35.240 [2024-10-28 13:33:49.175398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71154 ] 00:22:35.498 [2024-10-28 13:33:49.629707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:35.754 [2024-10-28 13:33:49.662834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.754 [2024-10-28 13:33:49.699723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.319 00:22:36.319 INFO: shutting down applications... 00:22:36.319 13:33:50 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.319 13:33:50 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:22:36.319 13:33:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:22:36.319 13:33:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71154 ]] 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71154 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71154 00:22:36.319 13:33:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:36.577 13:33:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:36.577 13:33:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:36.577 13:33:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71154 00:22:36.577 13:33:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71154 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:22:37.143 SPDK target shutdown done 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:22:37.143 13:33:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:22:37.143 Success 00:22:37.143 13:33:51 json_config_extra_key -- 
json_config/json_config_extra_key.sh@30 -- # echo Success 00:22:37.143 00:22:37.143 real 0m2.353s 00:22:37.143 user 0m1.776s 00:22:37.143 sys 0m0.604s 00:22:37.143 13:33:51 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.143 13:33:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:22:37.143 ************************************ 00:22:37.143 END TEST json_config_extra_key 00:22:37.143 ************************************ 00:22:37.143 13:33:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:37.143 13:33:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:37.143 13:33:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.143 13:33:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.143 ************************************ 00:22:37.143 START TEST alias_rpc 00:22:37.143 ************************************ 00:22:37.143 13:33:51 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:22:37.401 * Looking for test storage... 
00:22:37.401 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.401 13:33:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.401 --rc genhtml_branch_coverage=1 00:22:37.401 --rc genhtml_function_coverage=1 00:22:37.401 --rc genhtml_legend=1 00:22:37.401 --rc geninfo_all_blocks=1 00:22:37.401 --rc geninfo_unexecuted_blocks=1 00:22:37.401 00:22:37.401 ' 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.401 --rc genhtml_branch_coverage=1 00:22:37.401 --rc genhtml_function_coverage=1 00:22:37.401 --rc genhtml_legend=1 00:22:37.401 --rc geninfo_all_blocks=1 00:22:37.401 --rc geninfo_unexecuted_blocks=1 00:22:37.401 00:22:37.401 ' 00:22:37.401 13:33:51 alias_rpc -- common/autotest_common.sh@1703 -- 
# export 'LCOV=lcov 00:22:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.401 --rc genhtml_branch_coverage=1 00:22:37.401 --rc genhtml_function_coverage=1 00:22:37.401 --rc genhtml_legend=1 00:22:37.401 --rc geninfo_all_blocks=1 00:22:37.401 --rc geninfo_unexecuted_blocks=1 00:22:37.401 00:22:37.401 ' 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:37.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.402 --rc genhtml_branch_coverage=1 00:22:37.402 --rc genhtml_function_coverage=1 00:22:37.402 --rc genhtml_legend=1 00:22:37.402 --rc geninfo_all_blocks=1 00:22:37.402 --rc geninfo_unexecuted_blocks=1 00:22:37.402 00:22:37.402 ' 00:22:37.402 13:33:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:22:37.402 13:33:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71240 00:22:37.402 13:33:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71240 00:22:37.402 13:33:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71240 ']' 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.402 13:33:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:37.699 [2024-10-28 13:33:51.561809] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:22:37.699 [2024-10-28 13:33:51.562031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:22:37.699 [2024-10-28 13:33:51.726764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:37.699 [2024-10-28 13:33:51.757505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.699 [2024-10-28 13:33:51.816795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.634 13:33:52 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.634 13:33:52 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:22:38.634 13:33:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:22:38.894 13:33:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71240 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71240 ']' 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71240 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71240 00:22:38.894 killing process with pid 71240 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71240' 00:22:38.894 13:33:52 alias_rpc -- common/autotest_common.sh@969 -- # kill 71240 00:22:38.894 13:33:52 alias_rpc -- 
common/autotest_common.sh@974 -- # wait 71240 00:22:39.152 ************************************ 00:22:39.152 END TEST alias_rpc 00:22:39.152 ************************************ 00:22:39.152 00:22:39.152 real 0m2.047s 00:22:39.152 user 0m2.199s 00:22:39.152 sys 0m0.589s 00:22:39.152 13:33:53 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.152 13:33:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.411 13:33:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:22:39.411 13:33:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:39.411 13:33:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:39.411 13:33:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.411 13:33:53 -- common/autotest_common.sh@10 -- # set +x 00:22:39.411 ************************************ 00:22:39.411 START TEST spdkcli_tcp 00:22:39.411 ************************************ 00:22:39.411 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:22:39.411 * Looking for test storage... 
00:22:39.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:39.411 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:39.411 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:22:39.411 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:39.411 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.411 13:33:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.412 13:33:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.412 --rc genhtml_branch_coverage=1 00:22:39.412 --rc genhtml_function_coverage=1 00:22:39.412 --rc genhtml_legend=1 00:22:39.412 --rc geninfo_all_blocks=1 00:22:39.412 --rc geninfo_unexecuted_blocks=1 00:22:39.412 00:22:39.412 ' 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.412 --rc genhtml_branch_coverage=1 00:22:39.412 --rc genhtml_function_coverage=1 00:22:39.412 --rc genhtml_legend=1 00:22:39.412 --rc geninfo_all_blocks=1 00:22:39.412 --rc geninfo_unexecuted_blocks=1 00:22:39.412 00:22:39.412 ' 00:22:39.412 13:33:53 spdkcli_tcp -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.412 --rc genhtml_branch_coverage=1 00:22:39.412 --rc genhtml_function_coverage=1 00:22:39.412 --rc genhtml_legend=1 00:22:39.412 --rc geninfo_all_blocks=1 00:22:39.412 --rc geninfo_unexecuted_blocks=1 00:22:39.412 00:22:39.412 ' 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.412 --rc genhtml_branch_coverage=1 00:22:39.412 --rc genhtml_function_coverage=1 00:22:39.412 --rc genhtml_legend=1 00:22:39.412 --rc geninfo_all_blocks=1 00:22:39.412 --rc geninfo_unexecuted_blocks=1 00:22:39.412 00:22:39.412 ' 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71327 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:39.412 13:33:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71327 00:22:39.412 13:33:53 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 71327 ']' 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.412 13:33:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.670 [2024-10-28 13:33:53.706918] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:39.670 [2024-10-28 13:33:53.707609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:22:39.928 [2024-10-28 13:33:53.863281] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:39.928 [2024-10-28 13:33:53.896289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:39.928 [2024-10-28 13:33:53.950972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.928 [2024-10-28 13:33:53.951008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.897 13:33:54 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.897 13:33:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:22:40.897 13:33:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71344 00:22:40.897 13:33:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:22:40.897 13:33:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:22:40.897 [ 00:22:40.897 "bdev_malloc_delete", 00:22:40.897 "bdev_malloc_create", 00:22:40.897 "bdev_null_resize", 00:22:40.897 "bdev_null_delete", 00:22:40.897 "bdev_null_create", 00:22:40.897 "bdev_nvme_cuse_unregister", 00:22:40.897 "bdev_nvme_cuse_register", 00:22:40.897 "bdev_opal_new_user", 00:22:40.897 "bdev_opal_set_lock_state", 00:22:40.897 "bdev_opal_delete", 00:22:40.897 "bdev_opal_get_info", 00:22:40.897 "bdev_opal_create", 00:22:40.897 "bdev_nvme_opal_revert", 00:22:40.897 "bdev_nvme_opal_init", 00:22:40.897 "bdev_nvme_send_cmd", 00:22:40.897 "bdev_nvme_set_keys", 00:22:40.897 "bdev_nvme_get_path_iostat", 00:22:40.897 "bdev_nvme_get_mdns_discovery_info", 00:22:40.897 "bdev_nvme_stop_mdns_discovery", 00:22:40.897 "bdev_nvme_start_mdns_discovery", 00:22:40.897 "bdev_nvme_set_multipath_policy", 00:22:40.897 "bdev_nvme_set_preferred_path", 00:22:40.897 "bdev_nvme_get_io_paths", 00:22:40.897 "bdev_nvme_remove_error_injection", 00:22:40.897 "bdev_nvme_add_error_injection", 00:22:40.897 "bdev_nvme_get_discovery_info", 00:22:40.897 "bdev_nvme_stop_discovery", 00:22:40.897 "bdev_nvme_start_discovery", 00:22:40.897 
"bdev_nvme_get_controller_health_info", 00:22:40.897 "bdev_nvme_disable_controller", 00:22:40.897 "bdev_nvme_enable_controller", 00:22:40.897 "bdev_nvme_reset_controller", 00:22:40.897 "bdev_nvme_get_transport_statistics", 00:22:40.897 "bdev_nvme_apply_firmware", 00:22:40.897 "bdev_nvme_detach_controller", 00:22:40.897 "bdev_nvme_get_controllers", 00:22:40.897 "bdev_nvme_attach_controller", 00:22:40.897 "bdev_nvme_set_hotplug", 00:22:40.897 "bdev_nvme_set_options", 00:22:40.897 "bdev_passthru_delete", 00:22:40.897 "bdev_passthru_create", 00:22:40.897 "bdev_lvol_set_parent_bdev", 00:22:40.897 "bdev_lvol_set_parent", 00:22:40.897 "bdev_lvol_check_shallow_copy", 00:22:40.897 "bdev_lvol_start_shallow_copy", 00:22:40.897 "bdev_lvol_grow_lvstore", 00:22:40.897 "bdev_lvol_get_lvols", 00:22:40.897 "bdev_lvol_get_lvstores", 00:22:40.897 "bdev_lvol_delete", 00:22:40.897 "bdev_lvol_set_read_only", 00:22:40.897 "bdev_lvol_resize", 00:22:40.897 "bdev_lvol_decouple_parent", 00:22:40.897 "bdev_lvol_inflate", 00:22:40.897 "bdev_lvol_rename", 00:22:40.897 "bdev_lvol_clone_bdev", 00:22:40.897 "bdev_lvol_clone", 00:22:40.897 "bdev_lvol_snapshot", 00:22:40.897 "bdev_lvol_create", 00:22:40.897 "bdev_lvol_delete_lvstore", 00:22:40.897 "bdev_lvol_rename_lvstore", 00:22:40.897 "bdev_lvol_create_lvstore", 00:22:40.897 "bdev_raid_set_options", 00:22:40.897 "bdev_raid_remove_base_bdev", 00:22:40.897 "bdev_raid_add_base_bdev", 00:22:40.897 "bdev_raid_delete", 00:22:40.897 "bdev_raid_create", 00:22:40.897 "bdev_raid_get_bdevs", 00:22:40.897 "bdev_error_inject_error", 00:22:40.897 "bdev_error_delete", 00:22:40.897 "bdev_error_create", 00:22:40.897 "bdev_split_delete", 00:22:40.897 "bdev_split_create", 00:22:40.897 "bdev_delay_delete", 00:22:40.897 "bdev_delay_create", 00:22:40.897 "bdev_delay_update_latency", 00:22:40.897 "bdev_zone_block_delete", 00:22:40.897 "bdev_zone_block_create", 00:22:40.897 "blobfs_create", 00:22:40.897 "blobfs_detect", 00:22:40.897 "blobfs_set_cache_size", 00:22:40.897 
"bdev_aio_delete", 00:22:40.897 "bdev_aio_rescan", 00:22:40.897 "bdev_aio_create", 00:22:40.897 "bdev_ftl_set_property", 00:22:40.897 "bdev_ftl_get_properties", 00:22:40.897 "bdev_ftl_get_stats", 00:22:40.897 "bdev_ftl_unmap", 00:22:40.897 "bdev_ftl_unload", 00:22:40.897 "bdev_ftl_delete", 00:22:40.897 "bdev_ftl_load", 00:22:40.897 "bdev_ftl_create", 00:22:40.897 "bdev_virtio_attach_controller", 00:22:40.897 "bdev_virtio_scsi_get_devices", 00:22:40.897 "bdev_virtio_detach_controller", 00:22:40.897 "bdev_virtio_blk_set_hotplug", 00:22:40.897 "bdev_iscsi_delete", 00:22:40.897 "bdev_iscsi_create", 00:22:40.897 "bdev_iscsi_set_options", 00:22:40.897 "accel_error_inject_error", 00:22:40.897 "ioat_scan_accel_module", 00:22:40.897 "dsa_scan_accel_module", 00:22:40.897 "iaa_scan_accel_module", 00:22:40.897 "keyring_file_remove_key", 00:22:40.897 "keyring_file_add_key", 00:22:40.897 "keyring_linux_set_options", 00:22:40.897 "fsdev_aio_delete", 00:22:40.897 "fsdev_aio_create", 00:22:40.897 "iscsi_get_histogram", 00:22:40.897 "iscsi_enable_histogram", 00:22:40.897 "iscsi_set_options", 00:22:40.897 "iscsi_get_auth_groups", 00:22:40.897 "iscsi_auth_group_remove_secret", 00:22:40.897 "iscsi_auth_group_add_secret", 00:22:40.897 "iscsi_delete_auth_group", 00:22:40.897 "iscsi_create_auth_group", 00:22:40.897 "iscsi_set_discovery_auth", 00:22:40.897 "iscsi_get_options", 00:22:40.897 "iscsi_target_node_request_logout", 00:22:40.897 "iscsi_target_node_set_redirect", 00:22:40.897 "iscsi_target_node_set_auth", 00:22:40.897 "iscsi_target_node_add_lun", 00:22:40.897 "iscsi_get_stats", 00:22:40.897 "iscsi_get_connections", 00:22:40.897 "iscsi_portal_group_set_auth", 00:22:40.897 "iscsi_start_portal_group", 00:22:40.897 "iscsi_delete_portal_group", 00:22:40.897 "iscsi_create_portal_group", 00:22:40.897 "iscsi_get_portal_groups", 00:22:40.897 "iscsi_delete_target_node", 00:22:40.897 "iscsi_target_node_remove_pg_ig_maps", 00:22:40.897 "iscsi_target_node_add_pg_ig_maps", 00:22:40.897 
"iscsi_create_target_node", 00:22:40.897 "iscsi_get_target_nodes", 00:22:40.897 "iscsi_delete_initiator_group", 00:22:40.898 "iscsi_initiator_group_remove_initiators", 00:22:40.898 "iscsi_initiator_group_add_initiators", 00:22:40.898 "iscsi_create_initiator_group", 00:22:40.898 "iscsi_get_initiator_groups", 00:22:40.898 "nvmf_set_crdt", 00:22:40.898 "nvmf_set_config", 00:22:40.898 "nvmf_set_max_subsystems", 00:22:40.898 "nvmf_stop_mdns_prr", 00:22:40.898 "nvmf_publish_mdns_prr", 00:22:40.898 "nvmf_subsystem_get_listeners", 00:22:40.898 "nvmf_subsystem_get_qpairs", 00:22:40.898 "nvmf_subsystem_get_controllers", 00:22:40.898 "nvmf_get_stats", 00:22:40.898 "nvmf_get_transports", 00:22:40.898 "nvmf_create_transport", 00:22:40.898 "nvmf_get_targets", 00:22:40.898 "nvmf_delete_target", 00:22:40.898 "nvmf_create_target", 00:22:40.898 "nvmf_subsystem_allow_any_host", 00:22:40.898 "nvmf_subsystem_set_keys", 00:22:40.898 "nvmf_subsystem_remove_host", 00:22:40.898 "nvmf_subsystem_add_host", 00:22:40.898 "nvmf_ns_remove_host", 00:22:40.898 "nvmf_ns_add_host", 00:22:40.898 "nvmf_subsystem_remove_ns", 00:22:40.898 "nvmf_subsystem_set_ns_ana_group", 00:22:40.898 "nvmf_subsystem_add_ns", 00:22:40.898 "nvmf_subsystem_listener_set_ana_state", 00:22:40.898 "nvmf_discovery_get_referrals", 00:22:40.898 "nvmf_discovery_remove_referral", 00:22:40.898 "nvmf_discovery_add_referral", 00:22:40.898 "nvmf_subsystem_remove_listener", 00:22:40.898 "nvmf_subsystem_add_listener", 00:22:40.898 "nvmf_delete_subsystem", 00:22:40.898 "nvmf_create_subsystem", 00:22:40.898 "nvmf_get_subsystems", 00:22:40.898 "env_dpdk_get_mem_stats", 00:22:40.898 "nbd_get_disks", 00:22:40.898 "nbd_stop_disk", 00:22:40.898 "nbd_start_disk", 00:22:40.898 "ublk_recover_disk", 00:22:40.898 "ublk_get_disks", 00:22:40.898 "ublk_stop_disk", 00:22:40.898 "ublk_start_disk", 00:22:40.898 "ublk_destroy_target", 00:22:40.898 "ublk_create_target", 00:22:40.898 "virtio_blk_create_transport", 00:22:40.898 "virtio_blk_get_transports", 
00:22:40.898 "vhost_controller_set_coalescing", 00:22:40.898 "vhost_get_controllers", 00:22:40.898 "vhost_delete_controller", 00:22:40.898 "vhost_create_blk_controller", 00:22:40.898 "vhost_scsi_controller_remove_target", 00:22:40.898 "vhost_scsi_controller_add_target", 00:22:40.898 "vhost_start_scsi_controller", 00:22:40.898 "vhost_create_scsi_controller", 00:22:40.898 "thread_set_cpumask", 00:22:40.898 "scheduler_set_options", 00:22:40.898 "framework_get_governor", 00:22:40.898 "framework_get_scheduler", 00:22:40.898 "framework_set_scheduler", 00:22:40.898 "framework_get_reactors", 00:22:40.898 "thread_get_io_channels", 00:22:40.898 "thread_get_pollers", 00:22:40.898 "thread_get_stats", 00:22:40.898 "framework_monitor_context_switch", 00:22:40.898 "spdk_kill_instance", 00:22:40.898 "log_enable_timestamps", 00:22:40.898 "log_get_flags", 00:22:40.898 "log_clear_flag", 00:22:40.898 "log_set_flag", 00:22:40.898 "log_get_level", 00:22:40.898 "log_set_level", 00:22:40.898 "log_get_print_level", 00:22:40.898 "log_set_print_level", 00:22:40.898 "framework_enable_cpumask_locks", 00:22:40.898 "framework_disable_cpumask_locks", 00:22:40.898 "framework_wait_init", 00:22:40.898 "framework_start_init", 00:22:40.898 "scsi_get_devices", 00:22:40.898 "bdev_get_histogram", 00:22:40.898 "bdev_enable_histogram", 00:22:40.898 "bdev_set_qos_limit", 00:22:40.898 "bdev_set_qd_sampling_period", 00:22:40.898 "bdev_get_bdevs", 00:22:40.898 "bdev_reset_iostat", 00:22:40.898 "bdev_get_iostat", 00:22:40.898 "bdev_examine", 00:22:40.898 "bdev_wait_for_examine", 00:22:40.898 "bdev_set_options", 00:22:40.898 "accel_get_stats", 00:22:40.898 "accel_set_options", 00:22:40.898 "accel_set_driver", 00:22:40.898 "accel_crypto_key_destroy", 00:22:40.898 "accel_crypto_keys_get", 00:22:40.898 "accel_crypto_key_create", 00:22:40.898 "accel_assign_opc", 00:22:40.898 "accel_get_module_info", 00:22:40.898 "accel_get_opc_assignments", 00:22:40.898 "vmd_rescan", 00:22:40.898 "vmd_remove_device", 00:22:40.898 
"vmd_enable", 00:22:40.898 "sock_get_default_impl", 00:22:40.898 "sock_set_default_impl", 00:22:40.898 "sock_impl_set_options", 00:22:40.898 "sock_impl_get_options", 00:22:40.898 "iobuf_get_stats", 00:22:40.898 "iobuf_set_options", 00:22:40.898 "keyring_get_keys", 00:22:40.898 "framework_get_pci_devices", 00:22:40.898 "framework_get_config", 00:22:40.898 "framework_get_subsystems", 00:22:40.898 "fsdev_set_opts", 00:22:40.898 "fsdev_get_opts", 00:22:40.898 "trace_get_info", 00:22:40.898 "trace_get_tpoint_group_mask", 00:22:40.898 "trace_disable_tpoint_group", 00:22:40.898 "trace_enable_tpoint_group", 00:22:40.898 "trace_clear_tpoint_mask", 00:22:40.898 "trace_set_tpoint_mask", 00:22:40.898 "notify_get_notifications", 00:22:40.898 "notify_get_types", 00:22:40.898 "spdk_get_version", 00:22:40.898 "rpc_get_methods" 00:22:40.898 ] 00:22:40.898 13:33:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:22:40.898 13:33:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.898 13:33:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.898 13:33:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:40.898 13:33:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71327 00:22:40.898 13:33:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71327 ']' 00:22:40.898 13:33:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71327 00:22:40.898 13:33:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:22:40.898 13:33:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:40.898 13:33:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71327 00:22:41.157 killing process with pid 71327 00:22:41.157 13:33:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.157 13:33:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.157 13:33:55 spdkcli_tcp -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71327' 00:22:41.157 13:33:55 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71327 00:22:41.157 13:33:55 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71327 00:22:41.416 ************************************ 00:22:41.416 END TEST spdkcli_tcp 00:22:41.416 ************************************ 00:22:41.416 00:22:41.416 real 0m2.187s 00:22:41.416 user 0m3.878s 00:22:41.416 sys 0m0.667s 00:22:41.416 13:33:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.416 13:33:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.674 13:33:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:41.674 13:33:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:41.675 13:33:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.675 13:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.675 ************************************ 00:22:41.675 START TEST dpdk_mem_utility 00:22:41.675 ************************************ 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:22:41.675 * Looking for test storage... 
00:22:41.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:22:41.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.675 13:33:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.675 --rc genhtml_branch_coverage=1 00:22:41.675 --rc genhtml_function_coverage=1 00:22:41.675 --rc genhtml_legend=1 00:22:41.675 --rc geninfo_all_blocks=1 00:22:41.675 --rc geninfo_unexecuted_blocks=1 00:22:41.675 00:22:41.675 ' 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.675 --rc genhtml_branch_coverage=1 00:22:41.675 --rc genhtml_function_coverage=1 
00:22:41.675 --rc genhtml_legend=1 00:22:41.675 --rc geninfo_all_blocks=1 00:22:41.675 --rc geninfo_unexecuted_blocks=1 00:22:41.675 00:22:41.675 ' 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.675 --rc genhtml_branch_coverage=1 00:22:41.675 --rc genhtml_function_coverage=1 00:22:41.675 --rc genhtml_legend=1 00:22:41.675 --rc geninfo_all_blocks=1 00:22:41.675 --rc geninfo_unexecuted_blocks=1 00:22:41.675 00:22:41.675 ' 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:41.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.675 --rc genhtml_branch_coverage=1 00:22:41.675 --rc genhtml_function_coverage=1 00:22:41.675 --rc genhtml_legend=1 00:22:41.675 --rc geninfo_all_blocks=1 00:22:41.675 --rc geninfo_unexecuted_blocks=1 00:22:41.675 00:22:41.675 ' 00:22:41.675 13:33:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:41.675 13:33:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71427 00:22:41.675 13:33:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71427 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71427 ']' 00:22:41.675 13:33:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.675 13:33:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:41.933 [2024-10-28 13:33:55.923014] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:41.933 [2024-10-28 13:33:55.923201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71427 ] 00:22:41.933 [2024-10-28 13:33:56.068445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:42.191 [2024-10-28 13:33:56.098426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.191 [2024-10-28 13:33:56.148952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.758 13:33:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.758 13:33:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:22:42.758 13:33:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:22:42.758 13:33:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:22:42.758 13:33:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.758 13:33:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:42.758 { 00:22:42.758 "filename": "/tmp/spdk_mem_dump.txt" 00:22:42.758 } 00:22:42.758 13:33:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.758 13:33:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:22:43.018 DPDK memory size 810.000000 MiB in 1 heap(s) 
00:22:43.018 1 heaps totaling size 810.000000 MiB 00:22:43.018 size: 810.000000 MiB heap id: 0 00:22:43.018 end heaps---------- 00:22:43.018 9 mempools totaling size 595.772034 MiB 00:22:43.018 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:22:43.018 size: 158.602051 MiB name: PDU_data_out_Pool 00:22:43.018 size: 92.545471 MiB name: bdev_io_71427 00:22:43.018 size: 50.003479 MiB name: msgpool_71427 00:22:43.018 size: 36.509338 MiB name: fsdev_io_71427 00:22:43.018 size: 21.763794 MiB name: PDU_Pool 00:22:43.018 size: 19.513306 MiB name: SCSI_TASK_Pool 00:22:43.018 size: 4.133484 MiB name: evtpool_71427 00:22:43.018 size: 0.026123 MiB name: Session_Pool 00:22:43.018 end mempools------- 00:22:43.018 6 memzones totaling size 4.142822 MiB 00:22:43.018 size: 1.000366 MiB name: RG_ring_0_71427 00:22:43.018 size: 1.000366 MiB name: RG_ring_1_71427 00:22:43.018 size: 1.000366 MiB name: RG_ring_4_71427 00:22:43.018 size: 1.000366 MiB name: RG_ring_5_71427 00:22:43.018 size: 0.125366 MiB name: RG_ring_2_71427 00:22:43.018 size: 0.015991 MiB name: RG_ring_3_71427 00:22:43.018 end memzones------- 00:22:43.018 13:33:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:22:43.018 heap id: 0 total size: 810.000000 MiB number of busy elements: 311 number of free elements: 15 00:22:43.018 list of free elements. 
size: 10.696411 MiB 00:22:43.018 element at address: 0x200018a00000 with size: 0.999878 MiB 00:22:43.018 element at address: 0x200018c00000 with size: 0.999878 MiB 00:22:43.018 element at address: 0x200031800000 with size: 0.994446 MiB 00:22:43.018 element at address: 0x200000400000 with size: 0.993958 MiB 00:22:43.018 element at address: 0x200006400000 with size: 0.959839 MiB 00:22:43.018 element at address: 0x200012c00000 with size: 0.954285 MiB 00:22:43.018 element at address: 0x200018e00000 with size: 0.936584 MiB 00:22:43.018 element at address: 0x200000200000 with size: 0.600159 MiB 00:22:43.018 element at address: 0x20001a600000 with size: 0.568054 MiB 00:22:43.018 element at address: 0x20000a600000 with size: 0.488892 MiB 00:22:43.018 element at address: 0x200000c00000 with size: 0.487000 MiB 00:22:43.018 element at address: 0x200019000000 with size: 0.485657 MiB 00:22:43.018 element at address: 0x200003e00000 with size: 0.480286 MiB 00:22:43.018 element at address: 0x200027a00000 with size: 0.395752 MiB 00:22:43.018 element at address: 0x200000800000 with size: 0.351746 MiB 00:22:43.018 list of standard malloc elements. 
size: 199.384705 MiB 00:22:43.018 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:22:43.018 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:22:43.018 element at address: 0x200018afff80 with size: 1.000122 MiB 00:22:43.018 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:22:43.018 element at address: 0x200018efff80 with size: 1.000122 MiB 00:22:43.018 element at address: 0x2000003bbf00 with size: 0.257935 MiB 00:22:43.018 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:22:43.018 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:22:43.018 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:22:43.019 element at address: 0x2000002b9c40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000003bbe40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:22:43.019 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000085e580 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087e840 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087e900 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:22:43.019 element at address: 0x20000087f080 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f140 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f200 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f380 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f440 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f500 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000087f680 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d6c0 with 
size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:22:43.019 element at address: 
0x200000c7ebc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000cff000 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200003efb980 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:22:43.019 
element at address: 0x20000a67da00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691780 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691840 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691900 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692080 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692140 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692200 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692380 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692440 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692500 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a6925c0 with size: 0.000183 
MiB 00:22:43.019 element at address: 0x20001a692680 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692740 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692800 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:22:43.019 element at address: 0x20001a692980 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693040 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693100 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693280 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693340 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693400 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693580 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693640 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693700 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693880 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693940 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693ac0 
with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694000 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694180 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694240 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694300 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694480 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694540 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694600 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694780 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694840 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694900 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:22:43.020 element at 
address: 0x20001a694fc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a695080 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a695140 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a695200 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a695380 with size: 0.000183 MiB 00:22:43.020 element at address: 0x20001a695440 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a65500 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d080 with size: 0.000183 MiB 
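The per-element lines in this dump follow a fixed `element at address: <hex> with size: <n> MiB` shape, so the output can be post-processed mechanically. A small, hypothetical helper (not part of the SPDK tooling; the function name is illustrative) that totals the reported sizes from such a dump:

```shell
# Hypothetical post-processing helper (not part of SPDK): sum the sizes reported
# by "element at address: 0x... with size: N MiB" lines read from stdin.
sum_element_sizes() {
    awk '/element at address/ {
            # find the "size:" token and accumulate the number that follows it
            for (i = 1; i <= NF; i++)
                if ($i == "size:") total += $(i + 1)
         }
         END { printf "%.6f MiB\n", total }'
}
```

Typical use would be `grep 'element at address' build.log | sum_element_sizes` to cross-check the dump's own "list of memzone associated elements. size:" summary line.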
00:22:43.020 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e580 with 
size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:22:43.020 element at address: 
0x200027a6fa80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:22:43.020 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:22:43.020 list of memzone associated elements. size: 599.918884 MiB 00:22:43.020 element at address: 0x20001a695500 with size: 211.416748 MiB 00:22:43.020 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:22:43.020 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:22:43.020 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:22:43.020 element at address: 0x200012df4780 with size: 92.045044 MiB 00:22:43.020 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71427_0 00:22:43.020 element at address: 0x200000dff380 with size: 48.003052 MiB 00:22:43.020 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71427_0 00:22:43.020 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:22:43.020 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71427_0 00:22:43.020 element at address: 0x2000191be940 with size: 20.255554 MiB 00:22:43.020 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:22:43.020 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:22:43.020 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:22:43.020 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:22:43.020 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71427_0 00:22:43.020 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:22:43.020 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71427 00:22:43.021 element at address: 
0x2000002b9d00 with size: 1.008118 MiB 00:22:43.021 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71427 00:22:43.021 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:22:43.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:22:43.021 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:22:43.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:22:43.021 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:22:43.021 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:22:43.021 element at address: 0x200003efba40 with size: 1.008118 MiB 00:22:43.021 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:22:43.021 element at address: 0x200000cff180 with size: 1.000488 MiB 00:22:43.021 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71427 00:22:43.021 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:22:43.021 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71427 00:22:43.021 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:22:43.021 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71427 00:22:43.021 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:22:43.021 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71427 00:22:43.021 element at address: 0x20000087f740 with size: 0.500488 MiB 00:22:43.021 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71427 00:22:43.021 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:22:43.021 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71427 00:22:43.021 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:22:43.021 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:22:43.021 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:22:43.021 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:22:43.021 element at address: 
0x20001907c540 with size: 0.250488 MiB 00:22:43.021 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:22:43.021 element at address: 0x200000299a40 with size: 0.125488 MiB 00:22:43.021 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71427 00:22:43.021 element at address: 0x20000085e640 with size: 0.125488 MiB 00:22:43.021 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71427 00:22:43.021 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:22:43.021 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:22:43.021 element at address: 0x200027a65680 with size: 0.023743 MiB 00:22:43.021 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:22:43.021 element at address: 0x20000085a380 with size: 0.016113 MiB 00:22:43.021 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71427 00:22:43.021 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:22:43.021 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:22:43.021 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:22:43.021 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71427 00:22:43.021 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:22:43.021 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71427 00:22:43.021 element at address: 0x20000085a180 with size: 0.000305 MiB 00:22:43.021 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71427 00:22:43.021 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:22:43.021 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:22:43.021 13:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:22:43.021 13:33:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71427 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71427 ']' 00:22:43.021 
13:33:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71427 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71427 00:22:43.021 killing process with pid 71427 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71427' 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71427 00:22:43.021 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71427 00:22:43.587 00:22:43.587 real 0m1.953s 00:22:43.587 user 0m2.033s 00:22:43.587 sys 0m0.586s 00:22:43.587 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:43.587 ************************************ 00:22:43.587 END TEST dpdk_mem_utility 00:22:43.587 ************************************ 00:22:43.587 13:33:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:22:43.588 13:33:57 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:22:43.588 13:33:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:43.588 13:33:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.588 13:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.588 ************************************ 00:22:43.588 START TEST event 00:22:43.588 ************************************ 00:22:43.588 13:33:57 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:22:43.588 * Looking for test storage... 
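The trace above exercises SPDK's `killprocess` helper: it checks the pid with `kill -0`, echoes "killing process with pid …", kills it, and waits for it to exit. A minimal standalone sketch of that pattern (an assumed simplification — the real `autotest_common.sh` helper also inspects `uname` and the process name):

```shell
# Sketch of the killprocess pattern seen in the trace above (assumed simplification).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only checks that the process still exists
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid" 2>/dev/null
    fi
    # reap the child so no zombie is left behind; ignore its nonzero exit status
    wait "$pid" 2>/dev/null || true
}

sleep 30 &
killprocess $!
```

The `kill -0` probe is what makes the helper idempotent: calling it on an already-exited pid is a no-op rather than an error.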
00:22:43.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:22:43.588 13:33:57 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:43.588 13:33:57 event -- common/autotest_common.sh@1689 -- # lcov --version 00:22:43.588 13:33:57 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:43.847 13:33:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.847 13:33:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.847 13:33:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.847 13:33:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.847 13:33:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.847 13:33:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.847 13:33:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.847 13:33:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.847 13:33:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.847 13:33:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.847 13:33:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.847 13:33:57 event -- scripts/common.sh@344 -- # case "$op" in 00:22:43.847 13:33:57 event -- scripts/common.sh@345 -- # : 1 00:22:43.847 13:33:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.847 13:33:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.847 13:33:57 event -- scripts/common.sh@365 -- # decimal 1 00:22:43.847 13:33:57 event -- scripts/common.sh@353 -- # local d=1 00:22:43.847 13:33:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.847 13:33:57 event -- scripts/common.sh@355 -- # echo 1 00:22:43.847 13:33:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.847 13:33:57 event -- scripts/common.sh@366 -- # decimal 2 00:22:43.847 13:33:57 event -- scripts/common.sh@353 -- # local d=2 00:22:43.847 13:33:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.847 13:33:57 event -- scripts/common.sh@355 -- # echo 2 00:22:43.847 13:33:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.847 13:33:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.847 13:33:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.847 13:33:57 event -- scripts/common.sh@368 -- # return 0 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.847 --rc genhtml_branch_coverage=1 00:22:43.847 --rc genhtml_function_coverage=1 00:22:43.847 --rc genhtml_legend=1 00:22:43.847 --rc geninfo_all_blocks=1 00:22:43.847 --rc geninfo_unexecuted_blocks=1 00:22:43.847 00:22:43.847 ' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.847 --rc genhtml_branch_coverage=1 00:22:43.847 --rc genhtml_function_coverage=1 00:22:43.847 --rc genhtml_legend=1 00:22:43.847 --rc geninfo_all_blocks=1 00:22:43.847 --rc geninfo_unexecuted_blocks=1 00:22:43.847 00:22:43.847 ' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:43.847 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:43.847 --rc genhtml_branch_coverage=1 00:22:43.847 --rc genhtml_function_coverage=1 00:22:43.847 --rc genhtml_legend=1 00:22:43.847 --rc geninfo_all_blocks=1 00:22:43.847 --rc geninfo_unexecuted_blocks=1 00:22:43.847 00:22:43.847 ' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:43.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.847 --rc genhtml_branch_coverage=1 00:22:43.847 --rc genhtml_function_coverage=1 00:22:43.847 --rc genhtml_legend=1 00:22:43.847 --rc geninfo_all_blocks=1 00:22:43.847 --rc geninfo_unexecuted_blocks=1 00:22:43.847 00:22:43.847 ' 00:22:43.847 13:33:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:43.847 13:33:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:22:43.847 13:33:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:22:43.847 13:33:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.847 13:33:57 event -- common/autotest_common.sh@10 -- # set +x 00:22:43.847 ************************************ 00:22:43.847 START TEST event_perf 00:22:43.847 ************************************ 00:22:43.847 13:33:57 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:22:43.847 Running I/O for 1 seconds...[2024-10-28 13:33:57.853750] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:22:43.847 [2024-10-28 13:33:57.854175] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71513 ] 00:22:44.111 [2024-10-28 13:33:58.010811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:44.111 [2024-10-28 13:33:58.038310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.111 [2024-10-28 13:33:58.093942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.111 [2024-10-28 13:33:58.094105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.111 [2024-10-28 13:33:58.094214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.111 Running I/O for 1 seconds...[2024-10-28 13:33:58.094278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.044 00:22:45.044 lcore 0: 189254 00:22:45.044 lcore 1: 189254 00:22:45.044 lcore 2: 189254 00:22:45.044 lcore 3: 189254 00:22:45.044 done. 
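Earlier in the trace, `scripts/common.sh` compares versions component-wise: `lt 1.15 2` splits both strings on `.-:` into arrays and walks them index by index. A rough standalone sketch of that comparison (an assumed simplification; the real script factors this into `cmp_versions` and `decimal` helpers):

```shell
# Sketch of the component-wise version comparison traced above
# (illustrative simplification of scripts/common.sh's lt/cmp_versions).
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        # a missing component counts as 0, so "1.15" compares like "1.15.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # versions are equal, hence not less-than
}

version_lt 1.15 2 && echo "1.15 is older than 2"
```

This is why the trace returns 0 (true) for `lt 1.15 2`: the first components already decide the comparison (1 < 2), so the remaining components are never consulted.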
00:22:45.044 ************************************ 00:22:45.044 END TEST event_perf 00:22:45.044 ************************************ 00:22:45.044 00:22:45.044 real 0m1.354s 00:22:45.044 user 0m4.110s 00:22:45.044 sys 0m0.117s 00:22:45.044 13:33:59 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:45.044 13:33:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:22:45.302 13:33:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:22:45.302 13:33:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:45.302 13:33:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.302 13:33:59 event -- common/autotest_common.sh@10 -- # set +x 00:22:45.302 ************************************ 00:22:45.302 START TEST event_reactor 00:22:45.302 ************************************ 00:22:45.302 13:33:59 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:22:45.302 [2024-10-28 13:33:59.259899] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:45.302 [2024-10-28 13:33:59.260408] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:22:45.302 [2024-10-28 13:33:59.413817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:45.302 [2024-10-28 13:33:59.439098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.561 [2024-10-28 13:33:59.489706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.496 test_start 00:22:46.496 oneshot 00:22:46.496 tick 100 00:22:46.496 tick 100 00:22:46.496 tick 250 00:22:46.496 tick 100 00:22:46.496 tick 100 00:22:46.496 tick 100 00:22:46.496 tick 250 00:22:46.496 tick 500 00:22:46.496 tick 100 00:22:46.496 tick 100 00:22:46.496 tick 250 00:22:46.496 tick 100 00:22:46.496 tick 100 00:22:46.496 test_end 00:22:46.496 00:22:46.496 real 0m1.331s 00:22:46.496 user 0m1.128s 00:22:46.496 sys 0m0.095s 00:22:46.496 ************************************ 00:22:46.496 END TEST event_reactor 00:22:46.496 ************************************ 00:22:46.496 13:34:00 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.496 13:34:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:22:46.496 13:34:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:22:46.496 13:34:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:46.496 13:34:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.496 13:34:00 event -- common/autotest_common.sh@10 -- # set +x 00:22:46.496 ************************************ 00:22:46.496 START TEST event_reactor_perf 00:22:46.496 ************************************ 00:22:46.496 13:34:00 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:22:46.496 [2024-10-28 13:34:00.645466] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:22:46.496 [2024-10-28 13:34:00.645687] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71590 ] 00:22:46.754 [2024-10-28 13:34:00.799026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:46.754 [2024-10-28 13:34:00.831113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.754 [2024-10-28 13:34:00.877765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.127 test_start 00:22:48.127 test_end 00:22:48.127 Performance: 289687 events per second 00:22:48.127 00:22:48.127 real 0m1.334s 00:22:48.127 user 0m1.129s 00:22:48.127 sys 0m0.097s 00:22:48.127 ************************************ 00:22:48.127 END TEST event_reactor_perf 00:22:48.127 ************************************ 00:22:48.127 13:34:01 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:48.127 13:34:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.127 13:34:01 event -- event/event.sh@49 -- # uname -s 00:22:48.127 13:34:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:22:48.127 13:34:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:22:48.127 13:34:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:48.127 13:34:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:48.127 13:34:01 event -- common/autotest_common.sh@10 -- # set +x 00:22:48.127 ************************************ 00:22:48.127 START TEST event_scheduler 00:22:48.127 ************************************ 00:22:48.128 13:34:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:22:48.128 * Looking for test storage... 00:22:48.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.128 13:34:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.128 --rc genhtml_branch_coverage=1 00:22:48.128 --rc genhtml_function_coverage=1 00:22:48.128 --rc genhtml_legend=1 00:22:48.128 --rc geninfo_all_blocks=1 00:22:48.128 --rc geninfo_unexecuted_blocks=1 00:22:48.128 00:22:48.128 ' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.128 --rc genhtml_branch_coverage=1 00:22:48.128 --rc genhtml_function_coverage=1 00:22:48.128 --rc 
genhtml_legend=1 00:22:48.128 --rc geninfo_all_blocks=1 00:22:48.128 --rc geninfo_unexecuted_blocks=1 00:22:48.128 00:22:48.128 ' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.128 --rc genhtml_branch_coverage=1 00:22:48.128 --rc genhtml_function_coverage=1 00:22:48.128 --rc genhtml_legend=1 00:22:48.128 --rc geninfo_all_blocks=1 00:22:48.128 --rc geninfo_unexecuted_blocks=1 00:22:48.128 00:22:48.128 ' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:48.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.128 --rc genhtml_branch_coverage=1 00:22:48.128 --rc genhtml_function_coverage=1 00:22:48.128 --rc genhtml_legend=1 00:22:48.128 --rc geninfo_all_blocks=1 00:22:48.128 --rc geninfo_unexecuted_blocks=1 00:22:48.128 00:22:48.128 ' 00:22:48.128 13:34:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:22:48.128 13:34:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71657 00:22:48.128 13:34:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:22:48.128 13:34:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:22:48.128 13:34:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71657 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71657 ']' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.128 13:34:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:48.387 [2024-10-28 13:34:02.300338] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:48.387 [2024-10-28 13:34:02.300548] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71657 ] 00:22:48.387 [2024-10-28 13:34:02.459915] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:48.387 [2024-10-28 13:34:02.493800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.645 [2024-10-28 13:34:02.562636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.645 [2024-10-28 13:34:02.562809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.645 [2024-10-28 13:34:02.563621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.645 [2024-10-28 13:34:02.563769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.212 13:34:03 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.212 13:34:03 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:22:49.213 13:34:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:22:49.213 13:34:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.213 13:34:03 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.213 POWER: acpi-cpufreq driver is not supported 00:22:49.213 POWER: intel_pstate driver is not supported 00:22:49.213 POWER: amd-pstate driver is not supported 00:22:49.213 POWER: cppc_cpufreq driver is not supported 00:22:49.213 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:22:49.213 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:22:49.213 POWER: Unable to set Power Management Environment for lcore 0 00:22:49.213 [2024-10-28 13:34:03.294456] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:22:49.213 [2024-10-28 13:34:03.294483] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:22:49.213 [2024-10-28 13:34:03.294500] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:22:49.213 [2024-10-28 13:34:03.294519] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:22:49.213 [2024-10-28 13:34:03.294534] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:22:49.213 [2024-10-28 13:34:03.294546] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:22:49.213 13:34:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.213 13:34:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:22:49.213 13:34:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.213 13:34:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:49.471 [2024-10-28 13:34:03.393262] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:22:49.471 13:34:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:22:49.472 13:34:03 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:49.472 13:34:03 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 ************************************ 00:22:49.472 START TEST scheduler_create_thread 00:22:49.472 ************************************ 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 2 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 3 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 4 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 5 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 6 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:22:49.472 7 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 8 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 9 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 10 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.472 13:34:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:50.849 13:34:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.849 13:34:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:22:50.849 13:34:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:22:50.849 13:34:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.849 13:34:04 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:52.234 ************************************ 00:22:52.234 END TEST scheduler_create_thread 00:22:52.234 ************************************ 00:22:52.234 13:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.234 00:22:52.234 real 0m2.616s 00:22:52.234 user 0m0.018s 00:22:52.234 sys 0m0.008s 00:22:52.234 13:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.234 13:34:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:22:52.234 13:34:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:52.234 13:34:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71657 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71657 ']' 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71657 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71657 00:22:52.234 killing process with pid 71657 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71657' 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71657 00:22:52.234 13:34:06 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71657 00:22:52.492 [2024-10-28 13:34:06.503099] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:22:52.751 ************************************ 00:22:52.751 END TEST event_scheduler 00:22:52.751 ************************************ 00:22:52.751 00:22:52.751 real 0m4.763s 00:22:52.751 user 0m8.767s 00:22:52.751 sys 0m0.491s 00:22:52.751 13:34:06 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.751 13:34:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 13:34:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:22:52.751 13:34:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:22:52.751 13:34:06 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:52.751 13:34:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.751 13:34:06 event -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 ************************************ 00:22:52.751 START TEST app_repeat 00:22:52.751 ************************************ 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71763 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:22:52.751 
13:34:06 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:22:52.751 Process app_repeat pid: 71763 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71763' 00:22:52.751 spdk_app_start Round 0 00:22:52.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:22:52.751 13:34:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71763 /var/tmp/spdk-nbd.sock 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71763 ']' 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.751 13:34:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:52.751 [2024-10-28 13:34:06.902628] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:22:52.751 [2024-10-28 13:34:06.904586] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71763 ] 00:22:53.009 [2024-10-28 13:34:07.058757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:53.009 [2024-10-28 13:34:07.086506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:53.009 [2024-10-28 13:34:07.144504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.009 [2024-10-28 13:34:07.144546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.945 13:34:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.945 13:34:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:22:53.945 13:34:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:54.203 Malloc0 00:22:54.203 13:34:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:54.463 Malloc1 00:22:54.463 13:34:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:54.463 13:34:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:22:55.049 /dev/nbd0 00:22:55.049 13:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.049 13:34:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:55.049 1+0 records in 00:22:55.049 1+0 records out 00:22:55.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297941 s, 13.7 MB/s 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 
00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:55.049 13:34:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:22:55.049 13:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.049 13:34:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.049 13:34:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:22:55.352 /dev/nbd1 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:55.352 1+0 records in 00:22:55.352 1+0 records out 00:22:55.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366741 s, 11.2 MB/s 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:55.352 13:34:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.352 13:34:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:55.613 { 00:22:55.613 "nbd_device": "/dev/nbd0", 00:22:55.613 "bdev_name": "Malloc0" 00:22:55.613 }, 00:22:55.613 { 00:22:55.613 "nbd_device": "/dev/nbd1", 00:22:55.613 "bdev_name": "Malloc1" 00:22:55.613 } 00:22:55.613 ]' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:55.613 { 00:22:55.613 "nbd_device": "/dev/nbd0", 00:22:55.613 "bdev_name": "Malloc0" 00:22:55.613 }, 00:22:55.613 { 00:22:55.613 "nbd_device": "/dev/nbd1", 00:22:55.613 "bdev_name": "Malloc1" 00:22:55.613 } 00:22:55.613 ]' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:55.613 /dev/nbd1' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:55.613 
/dev/nbd1' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:22:55.613 256+0 records in 00:22:55.613 256+0 records out 00:22:55.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102456 s, 102 MB/s 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:55.613 256+0 records in 00:22:55.613 256+0 records out 00:22:55.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272862 s, 38.4 MB/s 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:55.613 256+0 records in 00:22:55.613 256+0 records out 00:22:55.613 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0423235 s, 24.8 MB/s 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:55.613 13:34:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:22:55.614 13:34:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:55.614 13:34:09 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:56.183 13:34:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:56.441 13:34:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:56.441 
13:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:22:56.698 13:34:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:22:56.699 13:34:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:56.699 13:34:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:22:56.699 13:34:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:22:56.957 13:34:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:22:57.214 [2024-10-28 13:34:11.288194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:57.214 [2024-10-28 13:34:11.339930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.214 [2024-10-28 13:34:11.339935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.472 [2024-10-28 13:34:11.397179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:22:57.472 [2024-10-28 13:34:11.397290] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
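The round that finishes above repeats one core pattern from `bdev/nbd_common.sh`: fill a temp file with 1 MiB of random data (`dd if=/dev/urandom ... bs=4096 count=256`), copy it onto each nbd device, then byte-compare the device contents back against the file with `cmp -b -n 1M`. A minimal standalone sketch of that write/verify pattern follows; it substitutes plain temp files for `/dev/nbd0` and `/dev/nbd1` (and drops `oflag=direct`, which regular files may not support) so it can run without an SPDK nbd target. All paths here are illustrative, not the log's real paths.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify pattern seen in the log.
# Assumption: plain temp files stand in for /dev/nbd0 and /dev/nbd1, so no
# SPDK target is needed; oflag=direct is omitted for the same reason.
set -euo pipefail

tmp_file=$(mktemp)                    # stands in for .../test/event/nbdrandtest
dev_list=("$(mktemp)" "$(mktemp)")    # stand-ins for /dev/nbd0 /dev/nbd1

# write phase: 256 x 4 KiB = 1 MiB of random data, copied to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "${dev_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1 MiB of each "device" with the file;
# cmp exits non-zero on the first differing byte, failing the script
for dev in "${dev_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

echo "verify OK"
rm -f "$tmp_file" "${dev_list[@]}"
```

The `-b -n 1M` flags match the log exactly: compare at most 1 MiB and print differing bytes if any are found.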
00:23:00.034 13:34:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:00.034 spdk_app_start Round 1 00:23:00.034 13:34:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:23:00.034 13:34:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71763 /var/tmp/spdk-nbd.sock 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71763 ']' 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.034 13:34:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:00.293 13:34:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.293 13:34:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:23:00.293 13:34:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:00.858 Malloc0 00:23:00.858 13:34:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:01.115 Malloc1 00:23:01.115 13:34:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:01.115 
13:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.115 13:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:01.372 /dev/nbd0 00:23:01.372 13:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.372 13:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:01.372 13:34:15 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:01.372 1+0 records in 00:23:01.372 1+0 records out 00:23:01.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248089 s, 16.5 MB/s 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:23:01.372 13:34:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:01.373 13:34:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:01.373 13:34:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:23:01.373 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.373 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.373 13:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:01.631 /dev/nbd1 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:01.889 13:34:15 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:01.889 1+0 records in 00:23:01.889 1+0 records out 00:23:01.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326858 s, 12.5 MB/s 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:01.889 13:34:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:01.889 13:34:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:02.147 { 00:23:02.147 "nbd_device": "/dev/nbd0", 00:23:02.147 "bdev_name": "Malloc0" 00:23:02.147 }, 00:23:02.147 { 00:23:02.147 "nbd_device": "/dev/nbd1", 00:23:02.147 "bdev_name": 
"Malloc1" 00:23:02.147 } 00:23:02.147 ]' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:02.147 { 00:23:02.147 "nbd_device": "/dev/nbd0", 00:23:02.147 "bdev_name": "Malloc0" 00:23:02.147 }, 00:23:02.147 { 00:23:02.147 "nbd_device": "/dev/nbd1", 00:23:02.147 "bdev_name": "Malloc1" 00:23:02.147 } 00:23:02.147 ]' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:02.147 /dev/nbd1' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:02.147 /dev/nbd1' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:02.147 256+0 records in 00:23:02.147 256+0 records out 00:23:02.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733071 s, 143 MB/s 
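The count check running here (`nbd_common.sh@64`-`@66`) extracts one device path per line from the `nbd_get_disks` JSON with `jq -r '.[] | .nbd_device'`, then counts `/dev/nbd` entries with `grep -c` and asserts the result matches the expected disk count. A standalone sketch of that logic, using the same two-disk JSON shape the log shows (copied from the log output, not fetched live; `sed` replaces `jq` here only to keep the sketch dependency-free):

```shell
# Sketch of the nbd_get_count logic from the log. The real nbd_common.sh
# pipes `rpc.py nbd_get_disks` output through `jq -r '.[] | .nbd_device'`;
# sed stands in for jq so this sketch has no external dependency.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# one device path per line, then count the /dev/nbd entries
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" \
  | sed -n 's/.*"nbd_device": "\([^"]*\)".*/\1/p')
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # prints 2 for the two-disk JSON above
```

With an empty `[]` list (as after teardown later in the log), `grep -c` yields 0, which is why the log's final check is `'[' 0 -ne 0 ']'`.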
00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:02.147 256+0 records in 00:23:02.147 256+0 records out 00:23:02.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257013 s, 40.8 MB/s 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:02.147 256+0 records in 00:23:02.147 256+0 records out 00:23:02.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348895 s, 30.1 MB/s 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.147 13:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.406 13:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:02.973 13:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:03.231 13:34:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:03.231 13:34:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:03.796 13:34:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:03.796 [2024-10-28 13:34:17.817448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:03.796 [2024-10-28 13:34:17.870316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.796 [2024-10-28 13:34:17.870317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.796 [2024-10-28 13:34:17.928080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:03.796 [2024-10-28 13:34:17.928188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:07.084 spdk_app_start Round 2 00:23:07.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:07.084 13:34:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:23:07.084 13:34:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:23:07.084 13:34:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71763 /var/tmp/spdk-nbd.sock 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71763 ']' 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.084 13:34:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:23:07.084 13:34:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:07.343 Malloc0 00:23:07.343 13:34:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:23:07.602 Malloc1 00:23:07.602 13:34:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:07.602 13:34:21 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:07.602 13:34:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:23:07.862 /dev/nbd0 00:23:07.862 13:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:07.862 13:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:07.862 13:34:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:07.862 13:34:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:23:07.862 13:34:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:07.862 13:34:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:07.862 13:34:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:08.121 1+0 records in 00:23:08.121 1+0 records out 00:23:08.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357435 s, 11.5 MB/s 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:08.121 
13:34:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:08.121 13:34:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:23:08.121 13:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.121 13:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:08.121 13:34:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:23:08.380 /dev/nbd1 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:23:08.380 1+0 records in 00:23:08.380 1+0 records out 00:23:08.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234824 s, 17.4 MB/s 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:23:08.380 13:34:22 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:08.380 13:34:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.380 13:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:08.639 { 00:23:08.639 "nbd_device": "/dev/nbd0", 00:23:08.639 "bdev_name": "Malloc0" 00:23:08.639 }, 00:23:08.639 { 00:23:08.639 "nbd_device": "/dev/nbd1", 00:23:08.639 "bdev_name": "Malloc1" 00:23:08.639 } 00:23:08.639 ]' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:08.639 { 00:23:08.639 "nbd_device": "/dev/nbd0", 00:23:08.639 "bdev_name": "Malloc0" 00:23:08.639 }, 00:23:08.639 { 00:23:08.639 "nbd_device": "/dev/nbd1", 00:23:08.639 "bdev_name": "Malloc1" 00:23:08.639 } 00:23:08.639 ]' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:08.639 /dev/nbd1' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:08.639 /dev/nbd1' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:23:08.639 
13:34:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:23:08.639 256+0 records in 00:23:08.639 256+0 records out 00:23:08.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00857307 s, 122 MB/s 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:08.639 256+0 records in 00:23:08.639 256+0 records out 00:23:08.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287201 s, 36.5 MB/s 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:08.639 256+0 records in 00:23:08.639 256+0 records out 00:23:08.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030406 s, 34.5 MB/s 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.639 13:34:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:09.207 13:34:23 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.207 13:34:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.466 13:34:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:09.724 13:34:23 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:09.724 13:34:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:23:09.724 13:34:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:23:10.306 13:34:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:23:10.306 [2024-10-28 13:34:24.403913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:10.306 [2024-10-28 13:34:24.455108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.306 [2024-10-28 13:34:24.455126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.564 [2024-10-28 13:34:24.516550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:23:10.564 [2024-10-28 13:34:24.516673] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:23:13.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
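The `nbd_dd_data_verify` calls traced above fill a temp file with 1 MiB of random data, `dd` it onto each NBD device, then `cmp` each device back against the file. A minimal standalone sketch of that write-then-verify pattern follows; it uses plain files in place of the real `/dev/nbdX` devices (which would additionally need `oflag=direct`), and the function name mirrors the helper only for readability:

```shell
#!/usr/bin/env bash

# Sketch of the write/verify pattern from bdev/nbd_common.sh: write one
# random pattern to every device in a list, then byte-compare each device
# against the pattern file.
nbd_dd_data_verify() {
    local operation=$1 tmp_file=$2
    shift 2
    local dev_list=("$@") dev
    if [ "$operation" = write ]; then
        # 256 blocks of 4 KiB = 1 MiB of random data, as in the trace
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
        for dev in "${dev_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
        done
    else
        # verify: compare the first 1 MiB of each device to the pattern
        for dev in "${dev_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
    fi
}

workdir=$(mktemp -d)
nbd_dd_data_verify write  "$workdir/pattern" "$workdir/nbd0" "$workdir/nbd1"
nbd_dd_data_verify verify "$workdir/pattern" "$workdir/nbd0" "$workdir/nbd1" && echo "verify ok"
```

Writing the same pattern to every device and comparing afterwards is what lets the test catch data corruption on either NBD path with a single temp file.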
00:23:13.096 13:34:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71763 /var/tmp/spdk-nbd.sock 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71763 ']' 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.096 13:34:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:23:13.663 13:34:27 event.app_repeat -- event/event.sh@39 -- # killprocess 71763 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71763 ']' 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71763 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71763 00:23:13.663 killing process with pid 71763 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71763' 00:23:13.663 13:34:27 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71763 00:23:13.663 13:34:27 event.app_repeat -- 
common/autotest_common.sh@974 -- # wait 71763 00:23:13.922 spdk_app_start is called in Round 0. 00:23:13.922 Shutdown signal received, stop current app iteration 00:23:13.922 Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 reinitialization... 00:23:13.922 spdk_app_start is called in Round 1. 00:23:13.922 Shutdown signal received, stop current app iteration 00:23:13.922 Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 reinitialization... 00:23:13.922 spdk_app_start is called in Round 2. 00:23:13.922 Shutdown signal received, stop current app iteration 00:23:13.922 Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 reinitialization... 00:23:13.922 spdk_app_start is called in Round 3. 00:23:13.922 Shutdown signal received, stop current app iteration 00:23:13.922 ************************************ 00:23:13.922 END TEST app_repeat 00:23:13.922 ************************************ 00:23:13.922 13:34:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:23:13.922 13:34:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:23:13.922 00:23:13.922 real 0m21.104s 00:23:13.922 user 0m48.655s 00:23:13.922 sys 0m3.120s 00:23:13.922 13:34:27 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:13.922 13:34:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 13:34:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:23:13.922 13:34:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:13.922 13:34:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:13.922 13:34:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:13.922 13:34:27 event -- common/autotest_common.sh@10 -- # set +x 00:23:13.922 ************************************ 00:23:13.922 START TEST cpu_locks 00:23:13.922 ************************************ 00:23:13.922 13:34:27 event.cpu_locks -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:23:13.922 * Looking for test storage... 00:23:13.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:23:13.922 13:34:28 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:13.922 13:34:28 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:13.922 13:34:28 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:23:14.181 13:34:28 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.181 13:34:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.182 13:34:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.182 --rc genhtml_branch_coverage=1 00:23:14.182 --rc genhtml_function_coverage=1 00:23:14.182 --rc genhtml_legend=1 00:23:14.182 --rc geninfo_all_blocks=1 00:23:14.182 --rc geninfo_unexecuted_blocks=1 00:23:14.182 00:23:14.182 ' 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.182 --rc genhtml_branch_coverage=1 00:23:14.182 --rc genhtml_function_coverage=1 00:23:14.182 --rc genhtml_legend=1 00:23:14.182 --rc geninfo_all_blocks=1 00:23:14.182 --rc geninfo_unexecuted_blocks=1 
00:23:14.182 00:23:14.182 ' 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.182 --rc genhtml_branch_coverage=1 00:23:14.182 --rc genhtml_function_coverage=1 00:23:14.182 --rc genhtml_legend=1 00:23:14.182 --rc geninfo_all_blocks=1 00:23:14.182 --rc geninfo_unexecuted_blocks=1 00:23:14.182 00:23:14.182 ' 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:14.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.182 --rc genhtml_branch_coverage=1 00:23:14.182 --rc genhtml_function_coverage=1 00:23:14.182 --rc genhtml_legend=1 00:23:14.182 --rc geninfo_all_blocks=1 00:23:14.182 --rc geninfo_unexecuted_blocks=1 00:23:14.182 00:23:14.182 ' 00:23:14.182 13:34:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:23:14.182 13:34:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:23:14.182 13:34:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:23:14.182 13:34:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:14.182 13:34:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:14.182 ************************************ 00:23:14.182 START TEST default_locks 00:23:14.182 ************************************ 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72229 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:14.182 
13:34:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72229 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72229 ']' 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.182 13:34:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:14.182 [2024-10-28 13:34:28.276005] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:14.182 [2024-10-28 13:34:28.276244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72229 ] 00:23:14.441 [2024-10-28 13:34:28.429600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
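The cpu_locks preamble above runs the `lt`/`cmp_versions` helpers from `scripts/common.sh` to decide whether the installed lcov predates 2.x and pick coverage flags accordingly. A standalone sketch of that field-by-field comparison of dot-separated versions (this is an illustration of the same technique, not the SPDK helper itself; it assumes purely numeric fields):

```shell
# Succeeds when $1 is an older version than $2, comparing numeric fields
# left to right after splitting on '.', '-' and ':' as the trace does;
# missing fields count as 0, so 2.39 sorts before 2.39.2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Splitting into arrays and comparing numerically avoids the classic pitfall of lexicographic string comparison, where "1.15" would wrongly sort before "1.9".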
00:23:14.441 [2024-10-28 13:34:28.461062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.441 [2024-10-28 13:34:28.512643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.375 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:15.375 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:23:15.375 13:34:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72229 00:23:15.375 13:34:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:15.375 13:34:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72229 ']' 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:15.632 killing process with pid 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72229' 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72229 00:23:15.632 13:34:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72229 00:23:16.197 13:34:30 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72229 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72229 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72229 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72229 ']' 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:16.197 ERROR: process (pid: 72229) is no longer running 00:23:16.197 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72229) - No such process 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:16.197 00:23:16.197 real 0m2.038s 00:23:16.197 user 0m2.108s 00:23:16.197 sys 0m0.710s 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.197 ************************************ 00:23:16.197 END TEST default_locks 00:23:16.197 ************************************ 00:23:16.197 13:34:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:23:16.197 13:34:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:23:16.197 13:34:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:23:16.197 13:34:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.197 13:34:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:16.197 ************************************ 00:23:16.197 START TEST default_locks_via_rpc 00:23:16.197 ************************************ 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72282 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72282 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72282 ']' 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.197 13:34:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:16.457 [2024-10-28 13:34:30.407830] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
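Both `waitfornbd_exit` (polling `/proc/partitions` until the nbd name disappears, up to 20 tries) and `waitforlisten` (with `max_retries=100`) in these traces share one shape: check a condition, sleep, give up after a bounded number of attempts. A generic sketch of that bounded poll loop, with a plain file standing in for the `/proc` entry (names here are illustrative, not SPDK helpers):

```shell
# Poll until a path disappears, with a bounded retry count -- the shape of
# waitfornbd_exit's loop over /proc/partitions (up to 20 checks, then fail).
wait_for_gone() {
    local path=$1 i
    for (( i = 1; i <= 20; i++ )); do
        [ -e "$path" ] || return 0   # resource gone: success
        sleep 0.1
    done
    return 1                         # still present after all polls
}

marker=$(mktemp)
( sleep 0.3; rm -f "$marker" ) &     # background teardown, like nbd_stop_disk
wait_for_gone "$marker" && echo "gone"
wait
```

The bounded count matters in CI: an unbounded `while` loop would hang the whole job if teardown never completes, whereas this converts a stuck resource into a fast, diagnosable failure.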
00:23:16.457 [2024-10-28 13:34:30.408052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:23:16.457 [2024-10-28 13:34:30.562367] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:16.457 [2024-10-28 13:34:30.592006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.715 [2024-10-28 13:34:30.649539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72282 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72282 00:23:17.279 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72282 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72282 ']' 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72282 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72282 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.846 killing process with pid 72282 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72282' 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72282 00:23:17.846 13:34:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72282 00:23:18.414 00:23:18.414 real 0m2.024s 00:23:18.414 user 0m2.096s 
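The `killprocess` sequences in these traces guard the kill three ways: `kill -0` to confirm the pid is alive, `ps --no-headers -o comm=` to confirm the command name (refusing, for instance, to signal `sudo`), then `kill` followed by `wait` to reap. A condensed sketch of that check-then-kill sequence; `kill_if_named` is our name for the illustration, not the SPDK function, and it assumes a Linux procps `ps`:

```shell
# Guarded kill, as in the killprocess traces: confirm the pid is alive
# (kill -0), confirm its command name via ps, signal it, then reap it.
kill_if_named() {
    local pid=$1 expected=$2 name
    kill -0 "$pid" 2>/dev/null || return 1        # not running
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = "$expected" ] || return 1         # wrong process: refuse
    kill "$pid"
    wait "$pid" 2>/dev/null                       # reap; ignore 143 status
    echo "killed $pid ($name)"
}

sleep 60 &
kill_if_named "$!" sleep
```

Checking the command name before signalling protects against pid reuse: by the time cleanup runs, the recorded pid may belong to an unrelated process.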
00:23:18.414 sys 0m0.707s 00:23:18.414 13:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:18.414 13:34:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:18.414 ************************************ 00:23:18.414 END TEST default_locks_via_rpc 00:23:18.414 ************************************ 00:23:18.414 13:34:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:23:18.414 13:34:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:18.414 13:34:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.414 13:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:18.414 ************************************ 00:23:18.414 START TEST non_locking_app_on_locked_coremask 00:23:18.414 ************************************ 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72334 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72334 /var/tmp/spdk.sock 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72334 ']' 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.414 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.414 13:34:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:18.414 [2024-10-28 13:34:32.424650] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:18.414 [2024-10-28 13:34:32.424857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72334 ] 00:23:18.414 [2024-10-28 13:34:32.568924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:18.673 [2024-10-28 13:34:32.597152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.673 [2024-10-28 13:34:32.652813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72350 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72350 /var/tmp/spdk2.sock 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72350 ']' 00:23:19.608 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:19.609 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:19.609 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:23:19.609 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.609 13:34:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:19.609 [2024-10-28 13:34:33.511671] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:19.609 [2024-10-28 13:34:33.511841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72350 ] 00:23:19.609 [2024-10-28 13:34:33.659993] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:19.609 [2024-10-28 13:34:33.707514] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:19.609 [2024-10-28 13:34:33.707583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.867 [2024-10-28 13:34:33.824607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.434 13:34:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.434 13:34:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:20.434 13:34:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72334 00:23:20.434 13:34:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72334 00:23:20.434 13:34:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72334 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@950 -- # '[' -z 72334 ']' 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72334 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72334 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.369 killing process with pid 72334 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72334' 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72334 00:23:21.369 13:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72334 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72350 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72350 ']' 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72350 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72350 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.305 killing process with pid 72350 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72350' 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72350 00:23:22.305 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72350 00:23:22.873 00:23:22.873 real 0m4.516s 00:23:22.873 user 0m5.050s 00:23:22.873 sys 0m1.330s 00:23:22.873 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.873 13:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:22.873 ************************************ 00:23:22.873 END TEST non_locking_app_on_locked_coremask 00:23:22.873 ************************************ 00:23:22.873 13:34:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:23:22.873 13:34:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:22.873 13:34:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.873 13:34:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:22.873 ************************************ 00:23:22.873 START TEST locking_app_on_unlocked_coremask 00:23:22.873 ************************************ 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:23:22.873 13:34:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72424 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72424 /var/tmp/spdk.sock 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72424 ']' 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.873 13:34:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:22.873 [2024-10-28 13:34:37.003644] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:22.873 [2024-10-28 13:34:37.003861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72424 ] 00:23:23.132 [2024-10-28 13:34:37.159315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:23.132 [2024-10-28 13:34:37.195239] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:23.132 [2024-10-28 13:34:37.195336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.132 [2024-10-28 13:34:37.256644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72440 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72440 /var/tmp/spdk2.sock 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72440 ']' 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:24.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.068 13:34:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:24.068 [2024-10-28 13:34:38.160000] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:24.068 [2024-10-28 13:34:38.160265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72440 ] 00:23:24.326 [2024-10-28 13:34:38.313967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:24.326 [2024-10-28 13:34:38.362960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.326 [2024-10-28 13:34:38.473755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.262 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.262 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:25.262 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72440 00:23:25.262 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72440 00:23:25.262 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72424 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72424 ']' 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # kill -0 72424 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72424 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:25.829 killing process with pid 72424 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72424' 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72424 00:23:25.829 13:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72424 00:23:26.764 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72440 00:23:26.764 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72440 ']' 00:23:26.764 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72440 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72440 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:23:26.765 killing process with pid 72440 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72440' 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72440 00:23:26.765 13:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72440 00:23:27.392 00:23:27.392 real 0m4.413s 00:23:27.392 user 0m4.926s 00:23:27.392 sys 0m1.348s 00:23:27.392 ************************************ 00:23:27.392 END TEST locking_app_on_unlocked_coremask 00:23:27.392 ************************************ 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:27.392 13:34:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:23:27.392 13:34:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:27.392 13:34:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.392 13:34:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:27.392 ************************************ 00:23:27.392 START TEST locking_app_on_locked_coremask 00:23:27.392 ************************************ 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72509 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # 
waitforlisten 72509 /var/tmp/spdk.sock 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72509 ']' 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.392 13:34:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:27.392 [2024-10-28 13:34:41.472599] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:27.392 [2024-10-28 13:34:41.472808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72509 ] 00:23:27.654 [2024-10-28 13:34:41.627699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:27.654 [2024-10-28 13:34:41.661969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.654 [2024-10-28 13:34:41.719272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72530 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72530 /var/tmp/spdk2.sock 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72530 /var/tmp/spdk2.sock 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72530 /var/tmp/spdk2.sock 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72530 ']' 00:23:28.589 13:34:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.589 13:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:28.589 [2024-10-28 13:34:42.599396] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:28.589 [2024-10-28 13:34:42.599599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72530 ] 00:23:28.846 [2024-10-28 13:34:42.755999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:28.846 [2024-10-28 13:34:42.803255] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72509 has claimed it. 00:23:28.846 [2024-10-28 13:34:42.803330] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:23:29.105 ERROR: process (pid: 72530) is no longer running 00:23:29.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72530) - No such process 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72509 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72509 00:23:29.105 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72509 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72509 ']' 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72509 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72509 00:23:29.683 
13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.683 killing process with pid 72509 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72509' 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72509 00:23:29.683 13:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72509 00:23:30.250 00:23:30.250 real 0m2.849s 00:23:30.250 user 0m3.278s 00:23:30.250 sys 0m0.845s 00:23:30.250 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.250 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 ************************************ 00:23:30.250 END TEST locking_app_on_locked_coremask 00:23:30.250 ************************************ 00:23:30.250 13:34:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:23:30.250 13:34:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.250 13:34:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.250 13:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 ************************************ 00:23:30.250 START TEST locking_overlapped_coremask 00:23:30.250 ************************************ 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72578 00:23:30.250 13:34:44 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72578 /var/tmp/spdk.sock 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72578 ']' 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.250 13:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:30.250 [2024-10-28 13:34:44.355598] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:30.251 [2024-10-28 13:34:44.355776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72578 ] 00:23:30.509 [2024-10-28 13:34:44.502923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:30.509 [2024-10-28 13:34:44.529601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.509 [2024-10-28 13:34:44.588898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.509 [2024-10-28 13:34:44.588971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.509 [2024-10-28 13:34:44.589051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72596 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72596 /var/tmp/spdk2.sock 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72596 /var/tmp/spdk2.sock 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 
72596 /var/tmp/spdk2.sock 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72596 ']' 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:31.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.443 13:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:31.443 [2024-10-28 13:34:45.437911] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:31.444 [2024-10-28 13:34:45.438081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:23:31.444 [2024-10-28 13:34:45.585046] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:31.702 [2024-10-28 13:34:45.637780] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72578 has claimed it. 00:23:31.702 [2024-10-28 13:34:45.637887] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:23:32.267 ERROR: process (pid: 72596) is no longer running 00:23:32.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72596) - No such process 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72578 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72578 ']' 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72578 00:23:32.268 13:34:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72578 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.268 killing process with pid 72578 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72578' 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72578 00:23:32.268 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72578 00:23:32.526 00:23:32.526 real 0m2.387s 00:23:32.526 user 0m6.616s 00:23:32.526 sys 0m0.583s 00:23:32.526 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.526 ************************************ 00:23:32.526 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:23:32.526 END TEST locking_overlapped_coremask 00:23:32.526 ************************************ 00:23:32.526 13:34:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:23:32.526 13:34:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:32.526 13:34:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:32.526 13:34:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:32.785 ************************************ 00:23:32.785 START TEST 
locking_overlapped_coremask_via_rpc 00:23:32.785 ************************************ 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72644 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72644 /var/tmp/spdk.sock 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72644 ']' 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.785 13:34:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:32.785 [2024-10-28 13:34:46.792244] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:23:32.785 [2024-10-28 13:34:46.792474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72644 ] 00:23:32.785 [2024-10-28 13:34:46.941508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:33.043 [2024-10-28 13:34:46.968647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:23:33.043 [2024-10-28 13:34:46.968697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:33.043 [2024-10-28 13:34:47.026110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.043 [2024-10-28 13:34:47.026203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.043 [2024-10-28 13:34:47.026249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72662 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72662 /var/tmp/spdk2.sock 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72662 ']' 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:33.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.976 13:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:33.976 [2024-10-28 13:34:47.961804] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:33.976 [2024-10-28 13:34:47.962009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72662 ] 00:23:33.976 [2024-10-28 13:34:48.118932] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:34.233 [2024-10-28 13:34:48.166523] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:23:34.233 [2024-10-28 13:34:48.166598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:34.233 [2024-10-28 13:34:48.281007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.233 [2024-10-28 13:34:48.284424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.233 [2024-10-28 13:34:48.284520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.168 13:34:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.168 [2024-10-28 13:34:49.032391] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72644 has claimed it. 00:23:35.168 request: 00:23:35.168 { 00:23:35.168 "method": "framework_enable_cpumask_locks", 00:23:35.168 "req_id": 1 00:23:35.168 } 00:23:35.168 Got JSON-RPC error response 00:23:35.168 response: 00:23:35.168 { 00:23:35.168 "code": -32603, 00:23:35.168 "message": "Failed to claim CPU core: 2" 00:23:35.168 } 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72644 /var/tmp/spdk.sock 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 72644 ']' 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72662 /var/tmp/spdk2.sock 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72662 ']' 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.168 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.425 ************************************ 00:23:35.425 END TEST locking_overlapped_coremask_via_rpc 00:23:35.425 ************************************ 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:35.425 00:23:35.425 real 0m2.875s 00:23:35.425 user 0m1.600s 00:23:35.425 sys 0m0.201s 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:35.425 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.687 13:34:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:23:35.687 13:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72644 ]] 00:23:35.687 13:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72644 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72644 ']' 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72644 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72644 00:23:35.687 killing process with pid 72644 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72644' 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72644 00:23:35.687 13:34:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72644 00:23:35.945 13:34:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72662 ]] 00:23:35.945 13:34:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72662 00:23:35.945 13:34:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72662 ']' 00:23:35.945 13:34:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72662 00:23:35.945 13:34:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:23:35.945 13:34:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.945 13:34:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72662 00:23:36.203 killing process with pid 72662 00:23:36.203 13:34:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:36.203 13:34:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:36.203 13:34:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 72662' 00:23:36.203 13:34:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72662 00:23:36.203 13:34:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72662 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72644 ]] 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72644 00:23:36.464 Process with pid 72644 is not found 00:23:36.464 Process with pid 72662 is not found 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72644 ']' 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72644 00:23:36.464 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72644) - No such process 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72644 is not found' 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72662 ]] 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72662 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72662 ']' 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72662 00:23:36.464 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72662) - No such process 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72662 is not found' 00:23:36.464 13:34:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:23:36.464 00:23:36.464 real 0m22.594s 00:23:36.464 user 0m39.288s 00:23:36.464 sys 0m6.815s 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.464 13:34:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:36.464 
************************************ 00:23:36.464 END TEST cpu_locks 00:23:36.464 ************************************ 00:23:36.464 ************************************ 00:23:36.464 END TEST event 00:23:36.464 ************************************ 00:23:36.464 00:23:36.464 real 0m53.008s 00:23:36.464 user 1m43.303s 00:23:36.464 sys 0m11.007s 00:23:36.464 13:34:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.464 13:34:50 event -- common/autotest_common.sh@10 -- # set +x 00:23:36.726 13:34:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:23:36.726 13:34:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:36.726 13:34:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.726 13:34:50 -- common/autotest_common.sh@10 -- # set +x 00:23:36.726 ************************************ 00:23:36.726 START TEST thread 00:23:36.726 ************************************ 00:23:36.726 13:34:50 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:23:36.726 * Looking for test storage... 
00:23:36.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:23:36.726 13:34:50 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:36.726 13:34:50 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:23:36.726 13:34:50 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:36.726 13:34:50 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:36.726 13:34:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.726 13:34:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.726 13:34:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.726 13:34:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.726 13:34:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.726 13:34:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.726 13:34:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.726 13:34:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.726 13:34:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.726 13:34:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.726 13:34:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.726 13:34:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:23:36.726 13:34:50 thread -- scripts/common.sh@345 -- # : 1 00:23:36.726 13:34:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.726 13:34:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.726 13:34:50 thread -- scripts/common.sh@365 -- # decimal 1 00:23:36.726 13:34:50 thread -- scripts/common.sh@353 -- # local d=1 00:23:36.726 13:34:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.726 13:34:50 thread -- scripts/common.sh@355 -- # echo 1 00:23:36.726 13:34:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.726 13:34:50 thread -- scripts/common.sh@366 -- # decimal 2 00:23:36.726 13:34:50 thread -- scripts/common.sh@353 -- # local d=2 00:23:36.726 13:34:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.727 13:34:50 thread -- scripts/common.sh@355 -- # echo 2 00:23:36.727 13:34:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.727 13:34:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.727 13:34:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.727 13:34:50 thread -- scripts/common.sh@368 -- # return 0 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.727 --rc genhtml_branch_coverage=1 00:23:36.727 --rc genhtml_function_coverage=1 00:23:36.727 --rc genhtml_legend=1 00:23:36.727 --rc geninfo_all_blocks=1 00:23:36.727 --rc geninfo_unexecuted_blocks=1 00:23:36.727 00:23:36.727 ' 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.727 --rc genhtml_branch_coverage=1 00:23:36.727 --rc genhtml_function_coverage=1 00:23:36.727 --rc genhtml_legend=1 00:23:36.727 --rc geninfo_all_blocks=1 00:23:36.727 --rc geninfo_unexecuted_blocks=1 00:23:36.727 00:23:36.727 ' 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:36.727 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.727 --rc genhtml_branch_coverage=1 00:23:36.727 --rc genhtml_function_coverage=1 00:23:36.727 --rc genhtml_legend=1 00:23:36.727 --rc geninfo_all_blocks=1 00:23:36.727 --rc geninfo_unexecuted_blocks=1 00:23:36.727 00:23:36.727 ' 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:36.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.727 --rc genhtml_branch_coverage=1 00:23:36.727 --rc genhtml_function_coverage=1 00:23:36.727 --rc genhtml_legend=1 00:23:36.727 --rc geninfo_all_blocks=1 00:23:36.727 --rc geninfo_unexecuted_blocks=1 00:23:36.727 00:23:36.727 ' 00:23:36.727 13:34:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.727 13:34:50 thread -- common/autotest_common.sh@10 -- # set +x 00:23:36.727 ************************************ 00:23:36.727 START TEST thread_poller_perf 00:23:36.727 ************************************ 00:23:36.727 13:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:23:36.986 [2024-10-28 13:34:50.892816] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:36.986 [2024-10-28 13:34:50.893233] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72800 ] 00:23:36.986 [2024-10-28 13:34:51.047796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:23:36.986 [2024-10-28 13:34:51.072900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.986 [2024-10-28 13:34:51.126539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.986 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:23:38.360 [2024-10-28T13:34:52.520Z] ====================================== 00:23:38.360 [2024-10-28T13:34:52.520Z] busy:2212642445 (cyc) 00:23:38.360 [2024-10-28T13:34:52.520Z] total_run_count: 299000 00:23:38.360 [2024-10-28T13:34:52.520Z] tsc_hz: 2200000000 (cyc) 00:23:38.360 [2024-10-28T13:34:52.520Z] ====================================== 00:23:38.360 [2024-10-28T13:34:52.520Z] poller_cost: 7400 (cyc), 3363 (nsec) 00:23:38.360 00:23:38.360 ************************************ 00:23:38.360 END TEST thread_poller_perf 00:23:38.360 ************************************ 00:23:38.360 real 0m1.353s 00:23:38.360 user 0m1.147s 00:23:38.360 sys 0m0.095s 00:23:38.360 13:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.360 13:34:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:23:38.360 13:34:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:23:38.360 13:34:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:23:38.360 13:34:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:38.360 13:34:52 thread -- common/autotest_common.sh@10 -- # set +x 00:23:38.360 ************************************ 00:23:38.360 START TEST thread_poller_perf 00:23:38.360 ************************************ 00:23:38.360 13:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:23:38.360 [2024-10-28 13:34:52.298055] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 
initialization... 00:23:38.360 [2024-10-28 13:34:52.298350] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72831 ] 00:23:38.360 [2024-10-28 13:34:52.450503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:38.360 [2024-10-28 13:34:52.479416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.618 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:23:38.618 [2024-10-28 13:34:52.559221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.553 [2024-10-28T13:34:53.713Z] ====================================== 00:23:39.553 [2024-10-28T13:34:53.713Z] busy:2209214590 (cyc) 00:23:39.553 [2024-10-28T13:34:53.713Z] total_run_count: 3040000 00:23:39.553 [2024-10-28T13:34:53.713Z] tsc_hz: 2200000000 (cyc) 00:23:39.553 [2024-10-28T13:34:53.713Z] ====================================== 00:23:39.553 [2024-10-28T13:34:53.713Z] poller_cost: 726 (cyc), 330 (nsec) 00:23:39.553 00:23:39.553 real 0m1.380s 00:23:39.553 user 0m1.169s 00:23:39.553 sys 0m0.095s 00:23:39.553 ************************************ 00:23:39.553 END TEST thread_poller_perf 00:23:39.553 ************************************ 00:23:39.553 13:34:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.553 13:34:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:23:39.553 13:34:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:23:39.553 ************************************ 00:23:39.553 END TEST thread 00:23:39.553 ************************************ 00:23:39.553 00:23:39.553 real 0m3.025s 00:23:39.553 user 0m2.461s 00:23:39.553 sys 0m0.338s 00:23:39.553 13:34:53 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:23:39.553 13:34:53 thread -- common/autotest_common.sh@10 -- # set +x 00:23:39.811 13:34:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:23:39.811 13:34:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:39.811 13:34:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:39.811 13:34:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:39.811 13:34:53 -- common/autotest_common.sh@10 -- # set +x 00:23:39.811 ************************************ 00:23:39.811 START TEST app_cmdline 00:23:39.811 ************************************ 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:39.811 * Looking for test storage... 00:23:39.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.811 13:34:53 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.811 13:34:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:39.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.811 --rc genhtml_branch_coverage=1 00:23:39.811 --rc genhtml_function_coverage=1 00:23:39.811 --rc genhtml_legend=1 00:23:39.811 --rc geninfo_all_blocks=1 00:23:39.811 --rc geninfo_unexecuted_blocks=1 00:23:39.811 00:23:39.811 ' 00:23:39.811 13:34:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 
00:23:39.811 13:34:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72920 00:23:39.811 13:34:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72920 00:23:39.811 13:34:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72920 ']' 00:23:39.811 13:34:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.812 13:34:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.812 13:34:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.812 13:34:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.812 13:34:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:40.085 [2024-10-28 13:34:54.099341] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:40.085 [2024-10-28 13:34:54.099609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:23:40.357 [2024-10-28 13:34:54.255730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:40.357 [2024-10-28 13:34:54.290567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.357 [2024-10-28 13:34:54.356837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.291 13:34:55 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.291 13:34:55 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:23:41.291 { 00:23:41.291 "version": "SPDK v25.01-pre git sha1 83ba90867", 00:23:41.291 "fields": { 00:23:41.291 "major": 25, 00:23:41.291 "minor": 1, 00:23:41.291 "patch": 0, 00:23:41.291 "suffix": "-pre", 00:23:41.291 "commit": "83ba90867" 00:23:41.291 } 00:23:41.291 } 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:23:41.291 13:34:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.291 13:34:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:23:41.291 13:34:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:41.291 13:34:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.550 13:34:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:23:41.550 13:34:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:23:41.550 13:34:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
env_dpdk_get_mem_stats 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:41.550 13:34:55 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:41.808 request: 00:23:41.808 { 00:23:41.808 "method": "env_dpdk_get_mem_stats", 00:23:41.808 "req_id": 1 00:23:41.808 } 00:23:41.808 Got JSON-RPC error response 00:23:41.808 response: 00:23:41.808 { 00:23:41.808 "code": -32601, 00:23:41.808 "message": "Method not found" 00:23:41.808 } 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:41.808 13:34:55 app_cmdline -- app/cmdline.sh@1 
-- # killprocess 72920 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72920 ']' 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72920 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72920 00:23:41.808 killing process with pid 72920 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72920' 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 72920 00:23:41.808 13:34:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 72920 00:23:42.066 00:23:42.066 real 0m2.489s 00:23:42.066 user 0m3.044s 00:23:42.066 sys 0m0.636s 00:23:42.066 13:34:56 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.066 ************************************ 00:23:42.066 END TEST app_cmdline 00:23:42.066 ************************************ 00:23:42.066 13:34:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:42.325 13:34:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:42.325 13:34:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.325 13:34:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.325 13:34:56 -- common/autotest_common.sh@10 -- # set +x 00:23:42.325 ************************************ 00:23:42.325 START TEST version 00:23:42.325 ************************************ 00:23:42.325 13:34:56 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:42.325 * Looking for test 
storage... 00:23:42.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:42.325 13:34:56 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:42.325 13:34:56 version -- common/autotest_common.sh@1689 -- # lcov --version 00:23:42.325 13:34:56 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:42.325 13:34:56 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:42.325 13:34:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.325 13:34:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.325 13:34:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.325 13:34:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.325 13:34:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.325 13:34:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.325 13:34:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.325 13:34:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.325 13:34:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.325 13:34:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.325 13:34:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.325 13:34:56 version -- scripts/common.sh@344 -- # case "$op" in 00:23:42.325 13:34:56 version -- scripts/common.sh@345 -- # : 1 00:23:42.325 13:34:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.325 13:34:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.325 13:34:56 version -- scripts/common.sh@365 -- # decimal 1 00:23:42.325 13:34:56 version -- scripts/common.sh@353 -- # local d=1 00:23:42.325 13:34:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.325 13:34:56 version -- scripts/common.sh@355 -- # echo 1 00:23:42.325 13:34:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.325 13:34:56 version -- scripts/common.sh@366 -- # decimal 2 00:23:42.325 13:34:56 version -- scripts/common.sh@353 -- # local d=2 00:23:42.325 13:34:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.325 13:34:56 version -- scripts/common.sh@355 -- # echo 2 00:23:42.325 13:34:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.326 13:34:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.326 13:34:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.326 13:34:56 version -- scripts/common.sh@368 -- # return 0 00:23:42.326 13:34:56 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.326 13:34:56 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.326 --rc genhtml_branch_coverage=1 00:23:42.326 --rc genhtml_function_coverage=1 00:23:42.326 --rc genhtml_legend=1 00:23:42.326 --rc geninfo_all_blocks=1 00:23:42.326 --rc geninfo_unexecuted_blocks=1 00:23:42.326 00:23:42.326 ' 00:23:42.326 13:34:56 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.326 --rc genhtml_branch_coverage=1 00:23:42.326 --rc genhtml_function_coverage=1 00:23:42.326 --rc genhtml_legend=1 00:23:42.326 --rc geninfo_all_blocks=1 00:23:42.326 --rc geninfo_unexecuted_blocks=1 00:23:42.326 00:23:42.326 ' 00:23:42.326 13:34:56 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:42.326 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.326 --rc genhtml_branch_coverage=1 00:23:42.326 --rc genhtml_function_coverage=1 00:23:42.326 --rc genhtml_legend=1 00:23:42.326 --rc geninfo_all_blocks=1 00:23:42.326 --rc geninfo_unexecuted_blocks=1 00:23:42.326 00:23:42.326 ' 00:23:42.326 13:34:56 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:42.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.326 --rc genhtml_branch_coverage=1 00:23:42.326 --rc genhtml_function_coverage=1 00:23:42.326 --rc genhtml_legend=1 00:23:42.326 --rc geninfo_all_blocks=1 00:23:42.326 --rc geninfo_unexecuted_blocks=1 00:23:42.326 00:23:42.326 ' 00:23:42.326 13:34:56 version -- app/version.sh@17 -- # get_header_version major 00:23:42.326 13:34:56 version -- app/version.sh@14 -- # cut -f2 00:23:42.326 13:34:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:42.326 13:34:56 version -- app/version.sh@14 -- # tr -d '"' 00:23:42.326 13:34:56 version -- app/version.sh@17 -- # major=25 00:23:42.326 13:34:56 version -- app/version.sh@18 -- # get_header_version minor 00:23:42.326 13:34:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:42.326 13:34:56 version -- app/version.sh@14 -- # cut -f2 00:23:42.326 13:34:56 version -- app/version.sh@14 -- # tr -d '"' 00:23:42.326 13:34:56 version -- app/version.sh@18 -- # minor=1 00:23:42.326 13:34:56 version -- app/version.sh@19 -- # get_header_version patch 00:23:42.326 13:34:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:42.584 13:34:56 version -- app/version.sh@14 -- # cut -f2 00:23:42.585 13:34:56 version -- app/version.sh@14 -- # tr -d '"' 00:23:42.585 13:34:56 version -- app/version.sh@19 -- # patch=0 00:23:42.585 
13:34:56 version -- app/version.sh@20 -- # get_header_version suffix 00:23:42.585 13:34:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:42.585 13:34:56 version -- app/version.sh@14 -- # cut -f2 00:23:42.585 13:34:56 version -- app/version.sh@14 -- # tr -d '"' 00:23:42.585 13:34:56 version -- app/version.sh@20 -- # suffix=-pre 00:23:42.585 13:34:56 version -- app/version.sh@22 -- # version=25.1 00:23:42.585 13:34:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:23:42.585 13:34:56 version -- app/version.sh@28 -- # version=25.1rc0 00:23:42.585 13:34:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:42.585 13:34:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:23:42.585 13:34:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:23:42.585 13:34:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:23:42.585 00:23:42.585 real 0m0.266s 00:23:42.585 user 0m0.162s 00:23:42.585 sys 0m0.139s 00:23:42.585 ************************************ 00:23:42.585 END TEST version 00:23:42.585 ************************************ 00:23:42.585 13:34:56 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.585 13:34:56 version -- common/autotest_common.sh@10 -- # set +x 00:23:42.585 13:34:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:23:42.585 13:34:56 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:23:42.585 13:34:56 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:23:42.585 13:34:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.585 13:34:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.585 13:34:56 -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.585 ************************************ 00:23:42.585 START TEST bdev_raid 00:23:42.585 ************************************ 00:23:42.585 13:34:56 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:23:42.585 * Looking for test storage... 00:23:42.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:42.585 13:34:56 bdev_raid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:42.585 13:34:56 bdev_raid -- common/autotest_common.sh@1689 -- # lcov --version 00:23:42.585 13:34:56 bdev_raid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:42.843 13:34:56 bdev_raid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@345 -- # : 1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.843 13:34:56 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:23:42.844 13:34:56 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.844 13:34:56 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.844 13:34:56 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.844 13:34:56 bdev_raid -- scripts/common.sh@368 -- # return 0 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.844 --rc genhtml_branch_coverage=1 00:23:42.844 --rc genhtml_function_coverage=1 00:23:42.844 --rc genhtml_legend=1 00:23:42.844 --rc geninfo_all_blocks=1 00:23:42.844 --rc geninfo_unexecuted_blocks=1 00:23:42.844 00:23:42.844 ' 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.844 --rc genhtml_branch_coverage=1 00:23:42.844 --rc genhtml_function_coverage=1 00:23:42.844 --rc genhtml_legend=1 00:23:42.844 --rc geninfo_all_blocks=1 00:23:42.844 --rc geninfo_unexecuted_blocks=1 00:23:42.844 00:23:42.844 ' 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1703 -- 
# export 'LCOV=lcov 00:23:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.844 --rc genhtml_branch_coverage=1 00:23:42.844 --rc genhtml_function_coverage=1 00:23:42.844 --rc genhtml_legend=1 00:23:42.844 --rc geninfo_all_blocks=1 00:23:42.844 --rc geninfo_unexecuted_blocks=1 00:23:42.844 00:23:42.844 ' 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:42.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.844 --rc genhtml_branch_coverage=1 00:23:42.844 --rc genhtml_function_coverage=1 00:23:42.844 --rc genhtml_legend=1 00:23:42.844 --rc geninfo_all_blocks=1 00:23:42.844 --rc geninfo_unexecuted_blocks=1 00:23:42.844 00:23:42.844 ' 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:42.844 13:34:56 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:23:42.844 13:34:56 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.844 13:34:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:42.844 ************************************ 00:23:42.844 START TEST raid1_resize_data_offset_test 00:23:42.844 ************************************ 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=73090 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 73090' 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:42.844 Process raid pid: 73090 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 73090 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 73090 ']' 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.844 13:34:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.844 [2024-10-28 13:34:56.894965] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:42.844 [2024-10-28 13:34:56.895727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.102 [2024-10-28 13:34:57.050878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:43.102 [2024-10-28 13:34:57.085673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.102 [2024-10-28 13:34:57.144627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.102 [2024-10-28 13:34:57.206277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.102 [2024-10-28 13:34:57.206326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.044 malloc0 00:23:44.044 13:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.044 malloc1 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- 
# set +x 00:23:44.044 null0 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.044 [2024-10-28 13:34:58.053780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:23:44.044 [2024-10-28 13:34:58.056477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:44.044 [2024-10-28 13:34:58.056550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:23:44.044 [2024-10-28 13:34:58.056762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:44.044 [2024-10-28 13:34:58.056788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:23:44.044 [2024-10-28 13:34:58.057292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:44.044 [2024-10-28 13:34:58.057514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:44.044 [2024-10-28 13:34:58.057571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:23:44.044 [2024-10-28 13:34:58.057949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq 
-r '.[].base_bdevs_list[2].data_offset' 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.044 [2024-10-28 13:34:58.110017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.044 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.303 malloc2 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.303 [2024-10-28 13:34:58.258251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc2 is claimed 00:23:44.303 [2024-10-28 13:34:58.265355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.303 [2024-10-28 13:34:58.268042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 73090 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 73090 ']' 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 73090 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73090 00:23:44.303 killing process with pid 73090 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:44.303 13:34:58 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73090' 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 73090 00:23:44.303 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 73090 00:23:44.303 [2024-10-28 13:34:58.353843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:44.303 [2024-10-28 13:34:58.355587] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:23:44.303 [2024-10-28 13:34:58.355682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.303 [2024-10-28 13:34:58.355714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:23:44.303 [2024-10-28 13:34:58.364221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.303 [2024-10-28 13:34:58.364610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.303 [2024-10-28 13:34:58.364636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:23:44.561 [2024-10-28 13:34:58.636516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:44.820 ************************************ 00:23:44.820 END TEST raid1_resize_data_offset_test 00:23:44.820 ************************************ 00:23:44.820 13:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:23:44.820 00:23:44.820 real 0m2.081s 00:23:44.820 user 0m2.240s 00:23:44.820 sys 0m0.531s 00:23:44.820 13:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.820 13:34:58 bdev_raid.raid1_resize_data_offset_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.820 13:34:58 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:23:44.820 13:34:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:44.820 13:34:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.820 13:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:44.820 ************************************ 00:23:44.820 START TEST raid0_resize_superblock_test 00:23:44.820 ************************************ 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:23:44.820 Process raid pid: 73147 00:23:44.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73147 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73147' 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73147 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73147 ']' 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.820 13:34:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.079 [2024-10-28 13:34:59.073516] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:45.079 [2024-10-28 13:34:59.073827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.079 [2024-10-28 13:34:59.234486] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:45.337 [2024-10-28 13:34:59.263971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.337 [2024-10-28 13:34:59.318715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.337 [2024-10-28 13:34:59.375430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:45.337 [2024-10-28 13:34:59.375689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 malloc0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 [2024-10-28 13:35:00.266327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:46.287 [2024-10-28 13:35:00.266599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.287 [2024-10-28 13:35:00.266846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:46.287 [2024-10-28 13:35:00.267060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.287 [2024-10-28 13:35:00.271328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.287 [2024-10-28 13:35:00.271551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:46.287 pt0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 5bc671ed-38ce-4943-9a33-2f4549bf960b 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 77731de2-6c95-42e8-89d0-5af4eb3096e8 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 00894e03-61bf-44c9-988d-b2d8ba9ba31e 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.287 [2024-10-28 13:35:00.424488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 77731de2-6c95-42e8-89d0-5af4eb3096e8 is claimed 00:23:46.287 [2024-10-28 13:35:00.424613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 00894e03-61bf-44c9-988d-b2d8ba9ba31e is claimed 00:23:46.287 [2024-10-28 13:35:00.424797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:46.287 [2024-10-28 13:35:00.424815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:23:46.287 [2024-10-28 13:35:00.425223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:46.287 [2024-10-28 13:35:00.425443] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:46.287 [2024-10-28 13:35:00.425466] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:23:46.287 [2024-10-28 13:35:00.425644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:23:46.287 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.545 [2024-10-28 13:35:00.540858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.545 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.545 [2024-10-28 13:35:00.588853] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:46.546 [2024-10-28 13:35:00.588905] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '77731de2-6c95-42e8-89d0-5af4eb3096e8' was resized: old size 131072, new size 204800 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:23:46.546 13:35:00 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.546 [2024-10-28 13:35:00.596680] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:46.546 [2024-10-28 13:35:00.596727] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '00894e03-61bf-44c9-988d-b2d8ba9ba31e' was resized: old size 131072, new size 204800 00:23:46.546 [2024-10-28 13:35:00.596768] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:23:46.546 13:35:00 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 [2024-10-28 13:35:00.708873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 [2024-10-28 13:35:00.760628] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:23:46.804 [2024-10-28 13:35:00.760756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:23:46.804 [2024-10-28 13:35:00.760775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.804 [2024-10-28 13:35:00.760796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:23:46.804 [2024-10-28 13:35:00.760955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.804 [2024-10-28 13:35:00.761015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.804 [2024-10-28 13:35:00.761045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 [2024-10-28 13:35:00.768516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:46.804 [2024-10-28 13:35:00.768587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.804 [2024-10-28 13:35:00.768622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:46.804 [2024-10-28 13:35:00.768638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.804 [2024-10-28 13:35:00.771900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.804 [2024-10-28 13:35:00.772068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:46.804 pt0 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 
13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 [2024-10-28 13:35:00.774480] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 77731de2-6c95-42e8-89d0-5af4eb3096e8 00:23:46.804 [2024-10-28 13:35:00.774590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 77731de2-6c95-42e8-89d0-5af4eb3096e8 is claimed 00:23:46.804 [2024-10-28 13:35:00.774755] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 00894e03-61bf-44c9-988d-b2d8ba9ba31e 00:23:46.804 [2024-10-28 13:35:00.774846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 00894e03-61bf-44c9-988d-b2d8ba9ba31e is claimed 00:23:46.804 [2024-10-28 13:35:00.775007] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 00894e03-61bf-44c9-988d-b2d8ba9ba31e (2) smaller than existing raid bdev Raid (3) 00:23:46.804 [2024-10-28 13:35:00.775048] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 77731de2-6c95-42e8-89d0-5af4eb3096e8: File exists 00:23:46.804 [2024-10-28 13:35:00.775125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:46.804 [2024-10-28 13:35:00.775417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:23:46.804 [2024-10-28 13:35:00.775934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:23:46.804 [2024-10-28 13:35:00.776279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:46.804 [2024-10-28 13:35:00.776317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:46.804 
[2024-10-28 13:35:00.776613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 [2024-10-28 13:35:00.788963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73147 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73147 ']' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73147 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73147 00:23:46.804 killing process with pid 73147 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73147' 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 73147 00:23:46.804 [2024-10-28 13:35:00.866760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:46.804 13:35:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 73147 00:23:46.804 [2024-10-28 13:35:00.866885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.804 [2024-10-28 13:35:00.866951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.804 [2024-10-28 13:35:00.866981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:47.062 [2024-10-28 13:35:01.072066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:47.321 ************************************ 00:23:47.321 END TEST raid0_resize_superblock_test 00:23:47.321 ************************************ 00:23:47.321 13:35:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:23:47.321 00:23:47.321 real 0m2.387s 00:23:47.321 user 0m2.867s 00:23:47.321 sys 0m0.579s 00:23:47.321 13:35:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:47.321 13:35:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.321 
13:35:01 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:23:47.321 13:35:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:47.321 13:35:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:47.321 13:35:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:47.321 ************************************ 00:23:47.321 START TEST raid1_resize_superblock_test 00:23:47.321 ************************************ 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:23:47.321 Process raid pid: 73218 00:23:47.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73218 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73218' 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73218 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73218 ']' 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.321 13:35:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.321 [2024-10-28 13:35:01.461997] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:47.321 [2024-10-28 13:35:01.462513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.580 [2024-10-28 13:35:01.618643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:47.580 [2024-10-28 13:35:01.654603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.580 [2024-10-28 13:35:01.715343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.839 [2024-10-28 13:35:01.779354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.839 [2024-10-28 13:35:01.779668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:48.405 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.405 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:23:48.405 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:23:48.405 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.405 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 malloc0 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 [2024-10-28 13:35:02.607576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:48.664 [2024-10-28 13:35:02.607828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.664 [2024-10-28 13:35:02.607880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:48.664 [2024-10-28 13:35:02.607900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.664 [2024-10-28 13:35:02.610909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.664 [2024-10-28 13:35:02.611090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:48.664 pt0 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 cd0dd506-ae76-4546-8f9d-97a1f2a467fe 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 4a2172c0-9288-4e78-bb24-5f9f6240f06e 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 b0f3f0ce-3098-40ff-b9bf-560e9d7111ec 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 [2024-10-28 13:35:02.759883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a2172c0-9288-4e78-bb24-5f9f6240f06e is claimed 00:23:48.664 [2024-10-28 13:35:02.760012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0f3f0ce-3098-40ff-b9bf-560e9d7111ec is claimed 00:23:48.664 [2024-10-28 13:35:02.760254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:48.664 [2024-10-28 13:35:02.760283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:23:48.664 [2024-10-28 13:35:02.760640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:48.664 [2024-10-28 13:35:02.760869] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:48.664 [2024-10-28 13:35:02.760893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:23:48.664 [2024-10-28 13:35:02.761070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.664 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:23:48.923 [2024-10-28 13:35:02.876281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 [2024-10-28 13:35:02.928258] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:48.923 [2024-10-28 13:35:02.928423] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4a2172c0-9288-4e78-bb24-5f9f6240f06e' was resized: old size 131072, new size 204800 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:23:48.923 13:35:02 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 [2024-10-28 13:35:02.936101] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:48.923 [2024-10-28 13:35:02.936277] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b0f3f0ce-3098-40ff-b9bf-560e9d7111ec' was resized: old size 131072, new size 204800 00:23:48.923 [2024-10-28 13:35:02.936330] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:23:48.923 [2024-10-28 13:35:03.048280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.923 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.182 [2024-10-28 13:35:03.092037] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:23:49.182 [2024-10-28 13:35:03.092306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:23:49.182 [2024-10-28 13:35:03.092360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:23:49.182 [2024-10-28 13:35:03.092579] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:49.182 [2024-10-28 13:35:03.092828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.182 [2024-10-28 13:35:03.092917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.182 [2024-10-28 13:35:03.092943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.182 [2024-10-28 13:35:03.099916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:23:49.182 [2024-10-28 13:35:03.100099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.182 [2024-10-28 13:35:03.100159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:49.182 [2024-10-28 13:35:03.100178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.182 [2024-10-28 13:35:03.103186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.182 [2024-10-28 13:35:03.103242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:23:49.182 pt0 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.182 
13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.182 [2024-10-28 13:35:03.105384] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4a2172c0-9288-4e78-bb24-5f9f6240f06e 00:23:49.182 [2024-10-28 13:35:03.105444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4a2172c0-9288-4e78-bb24-5f9f6240f06e is claimed 00:23:49.182 [2024-10-28 13:35:03.105560] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b0f3f0ce-3098-40ff-b9bf-560e9d7111ec 00:23:49.182 [2024-10-28 13:35:03.105594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b0f3f0ce-3098-40ff-b9bf-560e9d7111ec is claimed 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.182 [2024-10-28 13:35:03.105745] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev b0f3f0ce-3098-40ff-b9bf-560e9d7111ec (2) smaller than existing raid bdev Raid (3) 00:23:49.182 [2024-10-28 13:35:03.105772] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4a2172c0-9288-4e78-bb24-5f9f6240f06e: File exists 00:23:49.182 [2024-10-28 13:35:03.105836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:49.182 [2024-10-28 13:35:03.105849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:49.182 [2024-10-28 13:35:03.106161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:23:49.182 [2024-10-28 13:35:03.106340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:49.182 [2024-10-28 13:35:03.106461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:23:49.182 
[2024-10-28 13:35:03.106625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:23:49.182 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.183 [2024-10-28 13:35:03.120282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73218 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73218 ']' 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73218 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73218 00:23:49.183 killing process with pid 73218 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73218' 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 73218 00:23:49.183 [2024-10-28 13:35:03.194133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:49.183 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 73218 00:23:49.183 [2024-10-28 13:35:03.194258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:49.183 [2024-10-28 13:35:03.194338] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:49.183 [2024-10-28 13:35:03.194359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:23:49.441 [2024-10-28 13:35:03.398415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.700 ************************************ 00:23:49.700 END TEST raid1_resize_superblock_test 00:23:49.700 ************************************ 00:23:49.700 13:35:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:23:49.700 00:23:49.700 real 0m2.275s 00:23:49.700 user 0m2.658s 00:23:49.700 sys 0m0.560s 00:23:49.700 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.700 13:35:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.700 
13:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:23:49.700 13:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:23:49.700 13:35:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:23:49.700 13:35:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:23:49.700 13:35:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:23:49.700 13:35:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:23:49.700 13:35:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.700 13:35:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.700 13:35:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.700 ************************************ 00:23:49.700 START TEST raid_function_test_raid0 00:23:49.700 ************************************ 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:23:49.700 Process raid pid: 73293 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73293 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73293' 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73293 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 73293 ']' 00:23:49.700 13:35:03 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.700 13:35:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:49.700 [2024-10-28 13:35:03.805958] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:49.700 [2024-10-28 13:35:03.806178] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.959 [2024-10-28 13:35:03.963128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:49.959 [2024-10-28 13:35:03.993594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.959 [2024-10-28 13:35:04.050478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.959 [2024-10-28 13:35:04.110537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:49.959 [2024-10-28 13:35:04.110586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:50.896 Base_1 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.896 13:35:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:50.896 Base_2 00:23:50.896 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:50.897 [2024-10-28 
13:35:05.020374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:50.897 [2024-10-28 13:35:05.023679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:50.897 [2024-10-28 13:35:05.023866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:50.897 [2024-10-28 13:35:05.023884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:50.897 [2024-10-28 13:35:05.024389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:50.897 [2024-10-28 13:35:05.024609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:50.897 [2024-10-28 13:35:05.024634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:23:50.897 [2024-10-28 13:35:05.025027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:50.897 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 
00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.155 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:23:51.412 [2024-10-28 13:35:05.337089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:23:51.412 /dev/nbd0 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@873 -- # break 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.412 1+0 records in 00:23:51.412 1+0 records out 00:23:51.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275175 s, 14.9 MB/s 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:51.412 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:51.670 { 00:23:51.670 "nbd_device": "/dev/nbd0", 00:23:51.670 "bdev_name": 
"raid" 00:23:51.670 } 00:23:51.670 ]' 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:51.670 { 00:23:51.670 "nbd_device": "/dev/nbd0", 00:23:51.670 "bdev_name": "raid" 00:23:51.670 } 00:23:51.670 ]' 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # 
local rw_blk_num=4096 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:23:51.670 4096+0 records in 00:23:51.670 4096+0 records out 00:23:51.670 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0348283 s, 60.2 MB/s 00:23:51.670 13:35:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:23:51.928 4096+0 records in 00:23:51.928 4096+0 records out 00:23:51.928 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.303505 s, 6.9 MB/s 00:23:51.928 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:23:52.188 128+0 records in 00:23:52.188 128+0 records out 00:23:52.188 65536 bytes (66 kB, 64 KiB) copied, 0.000510573 s, 128 MB/s 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:23:52.188 2035+0 records in 00:23:52.188 2035+0 records out 00:23:52.188 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0117075 s, 89.0 MB/s 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 
)) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:23:52.188 456+0 records in 00:23:52.188 456+0 records out 00:23:52.188 233472 bytes (233 kB, 228 KiB) copied, 0.00167709 s, 139 MB/s 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:52.188 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:52.447 [2024-10-28 13:35:06.421668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:52.447 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:52.705 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:52.705 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep 
-c /dev/nbd 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73293 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 73293 ']' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 73293 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73293 00:23:52.706 killing process with pid 73293 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73293' 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 73293 00:23:52.706 [2024-10-28 13:35:06.785154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.706 13:35:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 73293 00:23:52.706 [2024-10-28 13:35:06.785277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:23:52.706 [2024-10-28 13:35:06.785352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.706 [2024-10-28 13:35:06.785368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:23:52.706 [2024-10-28 13:35:06.809923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:52.965 13:35:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:23:52.965 00:23:52.965 real 0m3.338s 00:23:52.965 user 0m4.432s 00:23:52.965 sys 0m0.950s 00:23:52.965 13:35:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.965 13:35:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 ************************************ 00:23:52.965 END TEST raid_function_test_raid0 00:23:52.965 ************************************ 00:23:52.965 13:35:07 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:23:52.965 13:35:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:52.965 13:35:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.965 13:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:52.965 ************************************ 00:23:52.965 START TEST raid_function_test_concat 00:23:52.965 ************************************ 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@69 -- # raid_pid=73417 00:23:52.965 Process raid pid: 73417 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73417' 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73417 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 73417 ']' 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.965 13:35:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:53.223 [2024-10-28 13:35:07.201189] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:53.223 [2024-10-28 13:35:07.201394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.223 [2024-10-28 13:35:07.358267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:53.482 [2024-10-28 13:35:07.389101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.482 [2024-10-28 13:35:07.446171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.482 [2024-10-28 13:35:07.506816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.482 [2024-10-28 13:35:07.506870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 Base_1 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 Base_2 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 
[2024-10-28 13:35:08.324818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:54.417 [2024-10-28 13:35:08.327349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:54.417 [2024-10-28 13:35:08.327459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:54.417 [2024-10-28 13:35:08.327475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:54.417 [2024-10-28 13:35:08.327830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:54.417 [2024-10-28 13:35:08.327996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:54.417 [2024-10-28 13:35:08.328015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:23:54.417 [2024-10-28 13:35:08.328208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks 
/var/tmp/spdk.sock raid /dev/nbd0 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.417 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:23:54.677 [2024-10-28 13:35:08.676976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:23:54.677 /dev/nbd0 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.677 1+0 records in 00:23:54.677 1+0 records out 00:23:54.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374676 s, 10.9 MB/s 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:54.677 13:35:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:54.936 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 
00:23:54.936 { 00:23:54.936 "nbd_device": "/dev/nbd0", 00:23:54.936 "bdev_name": "raid" 00:23:54.936 } 00:23:54.936 ]' 00:23:54.936 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:54.936 { 00:23:54.936 "nbd_device": "/dev/nbd0", 00:23:54.936 "bdev_name": "raid" 00:23:54.936 } 00:23:54.936 ]' 00:23:54.936 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # 
blksize=512 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:23:55.194 4096+0 records in 00:23:55.194 4096+0 records out 00:23:55.194 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0362233 s, 57.9 MB/s 00:23:55.194 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:23:55.452 4096+0 records in 00:23:55.452 4096+0 records out 00:23:55.452 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.319878 s, 6.6 MB/s 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:23:55.452 128+0 records in 00:23:55.452 128+0 records out 00:23:55.452 65536 bytes (66 kB, 64 KiB) copied, 0.00101454 s, 64.6 MB/s 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:23:55.452 2035+0 records in 00:23:55.452 2035+0 records out 00:23:55.452 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0132358 s, 78.7 MB/s 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:23:55.452 456+0 records in 00:23:55.452 456+0 records out 00:23:55.452 233472 bytes (233 kB, 228 KiB) copied, 0.00228429 s, 102 MB/s 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:55.452 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:56.017 [2024-10-28 13:35:09.930586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:23:56.017 13:35:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:56.275 
13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73417 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 73417 ']' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 73417 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73417 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:56.275 killing process with pid 73417 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73417' 00:23:56.275 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 73417 00:23:56.275 [2024-10-28 13:35:10.364781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:56.275 
13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 73417 00:23:56.275 [2024-10-28 13:35:10.364971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:56.275 [2024-10-28 13:35:10.365077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:56.275 [2024-10-28 13:35:10.365093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:23:56.275 [2024-10-28 13:35:10.407451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.840 13:35:10 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:23:56.840 00:23:56.840 real 0m3.624s 00:23:56.840 user 0m4.847s 00:23:56.840 sys 0m1.021s 00:23:56.840 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.840 13:35:10 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:23:56.840 ************************************ 00:23:56.840 END TEST raid_function_test_concat 00:23:56.840 ************************************ 00:23:56.840 13:35:10 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:23:56.840 13:35:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.840 13:35:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.840 13:35:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.840 ************************************ 00:23:56.840 START TEST raid0_resize_test 00:23:56.840 ************************************ 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:23:56.840 13:35:10 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73539 00:23:56.840 Process raid pid: 73539 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73539' 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73539 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 73539 ']' 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.840 13:35:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.840 [2024-10-28 13:35:10.874863] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:56.840 [2024-10-28 13:35:10.875048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.098 [2024-10-28 13:35:11.023910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:57.098 [2024-10-28 13:35:11.054760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.098 [2024-10-28 13:35:11.124961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.098 [2024-10-28 13:35:11.202622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.098 [2024-10-28 13:35:11.202690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 Base_1 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 Base_2 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.719 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 [2024-10-28 13:35:11.875527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:57.978 [2024-10-28 13:35:11.878318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:57.978 [2024-10-28 13:35:11.878410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:57.978 [2024-10-28 13:35:11.878434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:57.978 [2024-10-28 13:35:11.878831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:57.978 [2024-10-28 13:35:11.878993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:57.978 [2024-10-28 13:35:11.879013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:23:57.978 [2024-10-28 13:35:11.879249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 
00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.978 [2024-10-28 13:35:11.883436] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:57.978 [2024-10-28 13:35:11.883468] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:23:57.978 true 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.978 [2024-10-28 13:35:11.895762] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.978 13:35:11 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:57.978 [2024-10-28 13:35:11.943579] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:57.978 [2024-10-28 13:35:11.943679] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:23:57.978 [2024-10-28 13:35:11.943725] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:23:57.978 true 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.979 [2024-10-28 13:35:11.955803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73539 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 73539 ']' 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@954 -- # kill -0 73539 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:23:57.979 13:35:11 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73539 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73539' 00:23:57.979 killing process with pid 73539 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 73539 00:23:57.979 [2024-10-28 13:35:12.025043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:57.979 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 73539 00:23:57.979 [2024-10-28 13:35:12.025265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.979 [2024-10-28 13:35:12.025343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.979 [2024-10-28 13:35:12.025372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:23:57.979 [2024-10-28 13:35:12.027812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:58.237 13:35:12 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:23:58.237 00:23:58.237 real 0m1.533s 00:23:58.237 user 0m1.776s 00:23:58.237 sys 0m0.366s 00:23:58.237 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.237 13:35:12 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.237 
************************************ 00:23:58.237 END TEST raid0_resize_test 00:23:58.237 ************************************ 00:23:58.237 13:35:12 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:23:58.237 13:35:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:58.237 13:35:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:58.237 13:35:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:58.237 ************************************ 00:23:58.237 START TEST raid1_resize_test 00:23:58.237 ************************************ 00:23:58.237 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:23:58.237 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:23:58.237 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:23:58.237 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73594 00:23:58.238 Process raid pid: 73594 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73594' 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@344 -- # waitforlisten 73594 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 73594 ']' 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:58.238 13:35:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.496 [2024-10-28 13:35:12.476797] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:23:58.496 [2024-10-28 13:35:12.477041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.496 [2024-10-28 13:35:12.634121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:58.755 [2024-10-28 13:35:12.671071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.755 [2024-10-28 13:35:12.730028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.755 [2024-10-28 13:35:12.792619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:58.755 [2024-10-28 13:35:12.792690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 Base_1 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 Base_2 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 [2024-10-28 13:35:13.471479] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:23:59.322 [2024-10-28 13:35:13.474054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:23:59.322 [2024-10-28 13:35:13.474171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:23:59.322 [2024-10-28 13:35:13.474188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:59.322 [2024-10-28 13:35:13.474595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:59.322 [2024-10-28 13:35:13.474754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:23:59.322 [2024-10-28 13:35:13.474774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:23:59.322 [2024-10-28 13:35:13.474962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.322 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 [2024-10-28 13:35:13.479429] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:59.322 [2024-10-28 13:35:13.479469] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:23:59.582 true 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 
00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.582 [2024-10-28 13:35:13.491737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.582 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.582 [2024-10-28 13:35:13.531509] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:23:59.582 [2024-10-28 13:35:13.531557] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:23:59.583 [2024-10-28 13:35:13.531603] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:23:59.583 true 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.583 [2024-10-28 13:35:13.543734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73594 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 73594 ']' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 73594 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73594 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:59.583 killing process with pid 73594 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73594' 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@969 -- # kill 73594 00:23:59.583 [2024-10-28 13:35:13.628427] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:59.583 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 73594 00:23:59.583 [2024-10-28 13:35:13.628571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.583 [2024-10-28 13:35:13.629266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.583 [2024-10-28 13:35:13.629302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:23:59.583 [2024-10-28 13:35:13.630781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:59.842 13:35:13 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:23:59.842 00:23:59.842 real 0m1.499s 00:23:59.842 user 0m1.761s 00:23:59.842 sys 0m0.359s 00:23:59.842 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:59.842 13:35:13 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.842 ************************************ 00:23:59.842 END TEST raid1_resize_test 00:23:59.842 ************************************ 00:23:59.842 13:35:13 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:23:59.842 13:35:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:59.842 13:35:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:23:59.842 13:35:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:59.842 13:35:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:59.842 13:35:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:59.842 ************************************ 00:23:59.842 START TEST raid_state_function_test 00:23:59.842 ************************************ 
00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:59.842 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:59.843 13:35:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73641 00:23:59.843 Process raid pid: 73641 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73641' 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73641 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73641 ']' 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.843 13:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.101 [2024-10-28 13:35:14.028013] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:00.101 [2024-10-28 13:35:14.028221] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.101 [2024-10-28 13:35:14.183361] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:00.101 [2024-10-28 13:35:14.211924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.360 [2024-10-28 13:35:14.265323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.360 [2024-10-28 13:35:14.322457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:00.360 [2024-10-28 13:35:14.322505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.926 [2024-10-28 13:35:15.075966] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:00.926 
[2024-10-28 13:35:15.076030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:00.926 [2024-10-28 13:35:15.076051] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:00.926 [2024-10-28 13:35:15.076065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.926 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.187 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.188 "name": "Existed_Raid", 00:24:01.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.188 "strip_size_kb": 64, 00:24:01.188 "state": "configuring", 00:24:01.188 "raid_level": "raid0", 00:24:01.188 "superblock": false, 00:24:01.188 "num_base_bdevs": 2, 00:24:01.188 "num_base_bdevs_discovered": 0, 00:24:01.188 "num_base_bdevs_operational": 2, 00:24:01.188 "base_bdevs_list": [ 00:24:01.188 { 00:24:01.188 "name": "BaseBdev1", 00:24:01.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.188 "is_configured": false, 00:24:01.188 "data_offset": 0, 00:24:01.188 "data_size": 0 00:24:01.188 }, 00:24:01.188 { 00:24:01.188 "name": "BaseBdev2", 00:24:01.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.188 "is_configured": false, 00:24:01.188 "data_offset": 0, 00:24:01.188 "data_size": 0 00:24:01.188 } 00:24:01.188 ] 00:24:01.188 }' 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.188 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.458 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:01.458 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.458 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 [2024-10-28 13:35:15.607963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:01.729 [2024-10-28 13:35:15.608008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, 
state configuring 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 [2024-10-28 13:35:15.615995] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:01.729 [2024-10-28 13:35:15.616046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:01.729 [2024-10-28 13:35:15.616066] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:01.729 [2024-10-28 13:35:15.616079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 [2024-10-28 13:35:15.636042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:01.729 BaseBdev1 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:01.729 13:35:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 [ 00:24:01.729 { 00:24:01.729 "name": "BaseBdev1", 00:24:01.729 "aliases": [ 00:24:01.729 "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c" 00:24:01.729 ], 00:24:01.729 "product_name": "Malloc disk", 00:24:01.729 "block_size": 512, 00:24:01.729 "num_blocks": 65536, 00:24:01.729 "uuid": "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c", 00:24:01.729 "assigned_rate_limits": { 00:24:01.729 "rw_ios_per_sec": 0, 00:24:01.729 "rw_mbytes_per_sec": 0, 00:24:01.729 "r_mbytes_per_sec": 0, 00:24:01.729 "w_mbytes_per_sec": 0 00:24:01.729 }, 00:24:01.729 "claimed": true, 00:24:01.729 "claim_type": "exclusive_write", 00:24:01.729 "zoned": false, 00:24:01.729 "supported_io_types": { 00:24:01.729 "read": true, 00:24:01.729 "write": true, 00:24:01.729 "unmap": true, 00:24:01.729 "flush": true, 
00:24:01.729 "reset": true, 00:24:01.729 "nvme_admin": false, 00:24:01.729 "nvme_io": false, 00:24:01.729 "nvme_io_md": false, 00:24:01.729 "write_zeroes": true, 00:24:01.729 "zcopy": true, 00:24:01.729 "get_zone_info": false, 00:24:01.729 "zone_management": false, 00:24:01.729 "zone_append": false, 00:24:01.729 "compare": false, 00:24:01.729 "compare_and_write": false, 00:24:01.729 "abort": true, 00:24:01.729 "seek_hole": false, 00:24:01.729 "seek_data": false, 00:24:01.729 "copy": true, 00:24:01.729 "nvme_iov_md": false 00:24:01.729 }, 00:24:01.729 "memory_domains": [ 00:24:01.729 { 00:24:01.729 "dma_device_id": "system", 00:24:01.729 "dma_device_type": 1 00:24:01.729 }, 00:24:01.729 { 00:24:01.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.729 "dma_device_type": 2 00:24:01.729 } 00:24:01.729 ], 00:24:01.729 "driver_specific": {} 00:24:01.729 } 00:24:01.729 ] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.729 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:01.729 "name": "Existed_Raid", 00:24:01.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.729 "strip_size_kb": 64, 00:24:01.729 "state": "configuring", 00:24:01.729 "raid_level": "raid0", 00:24:01.729 "superblock": false, 00:24:01.729 "num_base_bdevs": 2, 00:24:01.729 "num_base_bdevs_discovered": 1, 00:24:01.729 "num_base_bdevs_operational": 2, 00:24:01.729 "base_bdevs_list": [ 00:24:01.729 { 00:24:01.729 "name": "BaseBdev1", 00:24:01.729 "uuid": "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c", 00:24:01.729 "is_configured": true, 00:24:01.729 "data_offset": 0, 00:24:01.729 "data_size": 65536 00:24:01.729 }, 00:24:01.729 { 00:24:01.729 "name": "BaseBdev2", 00:24:01.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.729 "is_configured": false, 00:24:01.729 "data_offset": 0, 00:24:01.729 "data_size": 0 00:24:01.729 } 00:24:01.729 ] 00:24:01.729 }' 00:24:01.730 13:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:01.730 13:35:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.296 [2024-10-28 13:35:16.152246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:02.296 [2024-10-28 13:35:16.152321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.296 [2024-10-28 13:35:16.160256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:02.296 [2024-10-28 13:35:16.162809] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:02.296 [2024-10-28 13:35:16.162860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.296 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.297 "name": "Existed_Raid", 00:24:02.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.297 "strip_size_kb": 64, 00:24:02.297 "state": "configuring", 00:24:02.297 "raid_level": "raid0", 00:24:02.297 "superblock": false, 00:24:02.297 "num_base_bdevs": 2, 00:24:02.297 
"num_base_bdevs_discovered": 1, 00:24:02.297 "num_base_bdevs_operational": 2, 00:24:02.297 "base_bdevs_list": [ 00:24:02.297 { 00:24:02.297 "name": "BaseBdev1", 00:24:02.297 "uuid": "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c", 00:24:02.297 "is_configured": true, 00:24:02.297 "data_offset": 0, 00:24:02.297 "data_size": 65536 00:24:02.297 }, 00:24:02.297 { 00:24:02.297 "name": "BaseBdev2", 00:24:02.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.297 "is_configured": false, 00:24:02.297 "data_offset": 0, 00:24:02.297 "data_size": 0 00:24:02.297 } 00:24:02.297 ] 00:24:02.297 }' 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.297 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.555 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:02.555 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.555 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.555 [2024-10-28 13:35:16.658341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.555 [2024-10-28 13:35:16.658399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:02.555 [2024-10-28 13:35:16.658421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:02.555 [2024-10-28 13:35:16.658755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:02.556 [2024-10-28 13:35:16.658953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:02.556 [2024-10-28 13:35:16.658976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:02.556 [2024-10-28 13:35:16.659257] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.556 BaseBdev2 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 [ 00:24:02.556 { 00:24:02.556 "name": "BaseBdev2", 00:24:02.556 "aliases": [ 00:24:02.556 "09103076-5500-456d-977e-87f1185b24df" 00:24:02.556 ], 00:24:02.556 "product_name": "Malloc disk", 00:24:02.556 "block_size": 512, 00:24:02.556 "num_blocks": 65536, 00:24:02.556 "uuid": "09103076-5500-456d-977e-87f1185b24df", 00:24:02.556 
"assigned_rate_limits": { 00:24:02.556 "rw_ios_per_sec": 0, 00:24:02.556 "rw_mbytes_per_sec": 0, 00:24:02.556 "r_mbytes_per_sec": 0, 00:24:02.556 "w_mbytes_per_sec": 0 00:24:02.556 }, 00:24:02.556 "claimed": true, 00:24:02.556 "claim_type": "exclusive_write", 00:24:02.556 "zoned": false, 00:24:02.556 "supported_io_types": { 00:24:02.556 "read": true, 00:24:02.556 "write": true, 00:24:02.556 "unmap": true, 00:24:02.556 "flush": true, 00:24:02.556 "reset": true, 00:24:02.556 "nvme_admin": false, 00:24:02.556 "nvme_io": false, 00:24:02.556 "nvme_io_md": false, 00:24:02.556 "write_zeroes": true, 00:24:02.556 "zcopy": true, 00:24:02.556 "get_zone_info": false, 00:24:02.556 "zone_management": false, 00:24:02.556 "zone_append": false, 00:24:02.556 "compare": false, 00:24:02.556 "compare_and_write": false, 00:24:02.556 "abort": true, 00:24:02.556 "seek_hole": false, 00:24:02.556 "seek_data": false, 00:24:02.556 "copy": true, 00:24:02.556 "nvme_iov_md": false 00:24:02.556 }, 00:24:02.556 "memory_domains": [ 00:24:02.556 { 00:24:02.556 "dma_device_id": "system", 00:24:02.556 "dma_device_type": 1 00:24:02.556 }, 00:24:02.556 { 00:24:02.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.556 "dma_device_type": 2 00:24:02.556 } 00:24:02.556 ], 00:24:02.556 "driver_specific": {} 00:24:02.556 } 00:24:02.556 ] 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:02.556 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.815 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:02.815 "name": "Existed_Raid", 00:24:02.815 "uuid": "8ac6a261-432c-4c9a-90b5-f8f16e3a2b39", 00:24:02.815 "strip_size_kb": 64, 00:24:02.815 "state": "online", 00:24:02.815 "raid_level": "raid0", 00:24:02.815 "superblock": false, 00:24:02.815 "num_base_bdevs": 2, 00:24:02.815 "num_base_bdevs_discovered": 2, 00:24:02.815 "num_base_bdevs_operational": 2, 00:24:02.815 "base_bdevs_list": [ 00:24:02.815 { 
00:24:02.815 "name": "BaseBdev1", 00:24:02.815 "uuid": "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c", 00:24:02.815 "is_configured": true, 00:24:02.815 "data_offset": 0, 00:24:02.815 "data_size": 65536 00:24:02.815 }, 00:24:02.815 { 00:24:02.815 "name": "BaseBdev2", 00:24:02.815 "uuid": "09103076-5500-456d-977e-87f1185b24df", 00:24:02.815 "is_configured": true, 00:24:02.815 "data_offset": 0, 00:24:02.815 "data_size": 65536 00:24:02.815 } 00:24:02.815 ] 00:24:02.815 }' 00:24:02.815 13:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:02.815 13:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.073 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:03.073 [2024-10-28 13:35:17.218941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.331 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:03.331 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:03.331 "name": "Existed_Raid", 00:24:03.331 "aliases": [ 00:24:03.331 "8ac6a261-432c-4c9a-90b5-f8f16e3a2b39" 00:24:03.331 ], 00:24:03.331 "product_name": "Raid Volume", 00:24:03.331 "block_size": 512, 00:24:03.331 "num_blocks": 131072, 00:24:03.331 "uuid": "8ac6a261-432c-4c9a-90b5-f8f16e3a2b39", 00:24:03.331 "assigned_rate_limits": { 00:24:03.331 "rw_ios_per_sec": 0, 00:24:03.331 "rw_mbytes_per_sec": 0, 00:24:03.331 "r_mbytes_per_sec": 0, 00:24:03.331 "w_mbytes_per_sec": 0 00:24:03.331 }, 00:24:03.331 "claimed": false, 00:24:03.331 "zoned": false, 00:24:03.331 "supported_io_types": { 00:24:03.331 "read": true, 00:24:03.331 "write": true, 00:24:03.331 "unmap": true, 00:24:03.331 "flush": true, 00:24:03.331 "reset": true, 00:24:03.331 "nvme_admin": false, 00:24:03.331 "nvme_io": false, 00:24:03.331 "nvme_io_md": false, 00:24:03.331 "write_zeroes": true, 00:24:03.331 "zcopy": false, 00:24:03.331 "get_zone_info": false, 00:24:03.331 "zone_management": false, 00:24:03.331 "zone_append": false, 00:24:03.331 "compare": false, 00:24:03.331 "compare_and_write": false, 00:24:03.331 "abort": false, 00:24:03.331 "seek_hole": false, 00:24:03.331 "seek_data": false, 00:24:03.331 "copy": false, 00:24:03.331 "nvme_iov_md": false 00:24:03.331 }, 00:24:03.331 "memory_domains": [ 00:24:03.331 { 00:24:03.331 "dma_device_id": "system", 00:24:03.331 "dma_device_type": 1 00:24:03.331 }, 00:24:03.331 { 00:24:03.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.331 "dma_device_type": 2 00:24:03.331 }, 00:24:03.331 { 00:24:03.331 "dma_device_id": "system", 00:24:03.331 "dma_device_type": 1 00:24:03.331 }, 00:24:03.331 { 00:24:03.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.331 "dma_device_type": 2 00:24:03.331 } 00:24:03.331 ], 00:24:03.331 "driver_specific": { 00:24:03.331 "raid": { 00:24:03.331 "uuid": "8ac6a261-432c-4c9a-90b5-f8f16e3a2b39", 
00:24:03.331 "strip_size_kb": 64, 00:24:03.331 "state": "online", 00:24:03.331 "raid_level": "raid0", 00:24:03.331 "superblock": false, 00:24:03.331 "num_base_bdevs": 2, 00:24:03.331 "num_base_bdevs_discovered": 2, 00:24:03.331 "num_base_bdevs_operational": 2, 00:24:03.331 "base_bdevs_list": [ 00:24:03.331 { 00:24:03.331 "name": "BaseBdev1", 00:24:03.331 "uuid": "0fbf34fa-ce4d-4126-997f-c3bc8ca6d27c", 00:24:03.331 "is_configured": true, 00:24:03.331 "data_offset": 0, 00:24:03.331 "data_size": 65536 00:24:03.331 }, 00:24:03.331 { 00:24:03.331 "name": "BaseBdev2", 00:24:03.331 "uuid": "09103076-5500-456d-977e-87f1185b24df", 00:24:03.331 "is_configured": true, 00:24:03.331 "data_offset": 0, 00:24:03.332 "data_size": 65536 00:24:03.332 } 00:24:03.332 ] 00:24:03.332 } 00:24:03.332 } 00:24:03.332 }' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:03.332 BaseBdev2' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.332 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.590 [2024-10-28 13:35:17.498787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.590 [2024-10-28 13:35:17.498827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:03.590 [2024-10-28 13:35:17.498902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.590 13:35:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:03.590 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.591 "name": "Existed_Raid", 00:24:03.591 "uuid": "8ac6a261-432c-4c9a-90b5-f8f16e3a2b39", 00:24:03.591 "strip_size_kb": 64, 00:24:03.591 "state": "offline", 00:24:03.591 "raid_level": "raid0", 00:24:03.591 "superblock": false, 00:24:03.591 "num_base_bdevs": 2, 00:24:03.591 "num_base_bdevs_discovered": 1, 00:24:03.591 "num_base_bdevs_operational": 1, 00:24:03.591 "base_bdevs_list": [ 00:24:03.591 { 00:24:03.591 "name": null, 00:24:03.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.591 "is_configured": false, 00:24:03.591 "data_offset": 0, 00:24:03.591 "data_size": 65536 00:24:03.591 }, 00:24:03.591 { 00:24:03.591 "name": "BaseBdev2", 00:24:03.591 "uuid": "09103076-5500-456d-977e-87f1185b24df", 00:24:03.591 "is_configured": true, 00:24:03.591 "data_offset": 0, 00:24:03.591 "data_size": 65536 00:24:03.591 } 00:24:03.591 ] 00:24:03.591 }' 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.591 13:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.160 13:35:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.160 [2024-10-28 13:35:18.091937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.160 [2024-10-28 13:35:18.092028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73641 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73641 ']' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73641 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73641 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:04.160 killing process with pid 73641 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73641' 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73641 00:24:04.160 [2024-10-28 13:35:18.224200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:04.160 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73641 00:24:04.160 [2024-10-28 13:35:18.225519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:04.417 13:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:04.417 00:24:04.417 real 0m4.544s 00:24:04.417 user 0m7.468s 00:24:04.417 sys 
0m0.727s 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.418 ************************************ 00:24:04.418 END TEST raid_state_function_test 00:24:04.418 ************************************ 00:24:04.418 13:35:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:24:04.418 13:35:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:04.418 13:35:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:04.418 13:35:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:04.418 ************************************ 00:24:04.418 START TEST raid_state_function_test_sb 00:24:04.418 ************************************ 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73889 00:24:04.418 Process raid pid: 73889 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73889' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73889 00:24:04.418 
13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73889 ']' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.418 13:35:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.675 [2024-10-28 13:35:18.613146] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:04.675 [2024-10-28 13:35:18.613387] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.675 [2024-10-28 13:35:18.764286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:04.675 [2024-10-28 13:35:18.794955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.932 [2024-10-28 13:35:18.848817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.932 [2024-10-28 13:35:18.910914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:04.932 [2024-10-28 13:35:18.910960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.874 [2024-10-28 13:35:19.726843] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:05.874 [2024-10-28 13:35:19.726914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:05.874 [2024-10-28 13:35:19.726934] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:05.874 [2024-10-28 13:35:19.726947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:05.874 13:35:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.874 "name": "Existed_Raid", 00:24:05.874 "uuid": "9e16fc57-43ec-41a1-98f6-1a129643854c", 00:24:05.874 "strip_size_kb": 64, 00:24:05.874 "state": "configuring", 00:24:05.874 "raid_level": "raid0", 00:24:05.874 "superblock": true, 00:24:05.874 "num_base_bdevs": 2, 00:24:05.874 "num_base_bdevs_discovered": 0, 00:24:05.874 "num_base_bdevs_operational": 2, 00:24:05.874 "base_bdevs_list": [ 00:24:05.874 { 
00:24:05.874 "name": "BaseBdev1", 00:24:05.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.874 "is_configured": false, 00:24:05.874 "data_offset": 0, 00:24:05.874 "data_size": 0 00:24:05.874 }, 00:24:05.874 { 00:24:05.874 "name": "BaseBdev2", 00:24:05.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.874 "is_configured": false, 00:24:05.874 "data_offset": 0, 00:24:05.874 "data_size": 0 00:24:05.874 } 00:24:05.874 ] 00:24:05.874 }' 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.874 13:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.133 [2024-10-28 13:35:20.242865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:06.133 [2024-10-28 13:35:20.242910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.133 [2024-10-28 13:35:20.250951] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:06.133 [2024-10-28 13:35:20.251001] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:06.133 [2024-10-28 13:35:20.251020] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:06.133 [2024-10-28 13:35:20.251033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.133 [2024-10-28 13:35:20.271125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.133 BaseBdev1 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.133 
13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.133 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.392 [ 00:24:06.392 { 00:24:06.392 "name": "BaseBdev1", 00:24:06.392 "aliases": [ 00:24:06.392 "fd28c7cb-c7f8-4686-9688-c2be22666d83" 00:24:06.392 ], 00:24:06.392 "product_name": "Malloc disk", 00:24:06.392 "block_size": 512, 00:24:06.392 "num_blocks": 65536, 00:24:06.392 "uuid": "fd28c7cb-c7f8-4686-9688-c2be22666d83", 00:24:06.392 "assigned_rate_limits": { 00:24:06.392 "rw_ios_per_sec": 0, 00:24:06.392 "rw_mbytes_per_sec": 0, 00:24:06.392 "r_mbytes_per_sec": 0, 00:24:06.392 "w_mbytes_per_sec": 0 00:24:06.392 }, 00:24:06.392 "claimed": true, 00:24:06.392 "claim_type": "exclusive_write", 00:24:06.392 "zoned": false, 00:24:06.392 "supported_io_types": { 00:24:06.392 "read": true, 00:24:06.392 "write": true, 00:24:06.392 "unmap": true, 00:24:06.392 "flush": true, 00:24:06.392 "reset": true, 00:24:06.392 "nvme_admin": false, 00:24:06.392 "nvme_io": false, 00:24:06.392 "nvme_io_md": false, 00:24:06.392 "write_zeroes": true, 00:24:06.392 "zcopy": true, 00:24:06.392 "get_zone_info": false, 00:24:06.392 "zone_management": false, 00:24:06.392 "zone_append": false, 00:24:06.392 "compare": false, 00:24:06.392 "compare_and_write": false, 00:24:06.392 "abort": true, 00:24:06.392 "seek_hole": false, 00:24:06.392 "seek_data": false, 00:24:06.392 "copy": true, 00:24:06.392 "nvme_iov_md": false 00:24:06.392 }, 00:24:06.392 "memory_domains": [ 00:24:06.392 { 00:24:06.392 "dma_device_id": "system", 00:24:06.392 
"dma_device_type": 1 00:24:06.392 }, 00:24:06.392 { 00:24:06.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.392 "dma_device_type": 2 00:24:06.392 } 00:24:06.392 ], 00:24:06.392 "driver_specific": {} 00:24:06.392 } 00:24:06.392 ] 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.392 
13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.392 "name": "Existed_Raid", 00:24:06.392 "uuid": "1e47b1c7-c488-4526-8347-ce9f559c97e8", 00:24:06.392 "strip_size_kb": 64, 00:24:06.392 "state": "configuring", 00:24:06.392 "raid_level": "raid0", 00:24:06.392 "superblock": true, 00:24:06.392 "num_base_bdevs": 2, 00:24:06.392 "num_base_bdevs_discovered": 1, 00:24:06.392 "num_base_bdevs_operational": 2, 00:24:06.392 "base_bdevs_list": [ 00:24:06.392 { 00:24:06.392 "name": "BaseBdev1", 00:24:06.392 "uuid": "fd28c7cb-c7f8-4686-9688-c2be22666d83", 00:24:06.392 "is_configured": true, 00:24:06.392 "data_offset": 2048, 00:24:06.392 "data_size": 63488 00:24:06.392 }, 00:24:06.392 { 00:24:06.392 "name": "BaseBdev2", 00:24:06.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.392 "is_configured": false, 00:24:06.392 "data_offset": 0, 00:24:06.392 "data_size": 0 00:24:06.392 } 00:24:06.392 ] 00:24:06.392 }' 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.392 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.698 [2024-10-28 13:35:20.835316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:06.698 [2024-10-28 13:35:20.835390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.698 [2024-10-28 13:35:20.843350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.698 [2024-10-28 13:35:20.845918] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:06.698 [2024-10-28 13:35:20.845964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:06.698 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.957 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.957 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.957 "name": "Existed_Raid", 00:24:06.957 "uuid": "a4e170c6-aed8-47bf-a288-7be930aa00ac", 00:24:06.957 "strip_size_kb": 64, 00:24:06.957 "state": "configuring", 00:24:06.957 "raid_level": "raid0", 00:24:06.957 "superblock": true, 00:24:06.957 "num_base_bdevs": 2, 00:24:06.957 "num_base_bdevs_discovered": 1, 00:24:06.957 "num_base_bdevs_operational": 2, 00:24:06.957 "base_bdevs_list": [ 00:24:06.957 { 00:24:06.957 "name": "BaseBdev1", 00:24:06.957 "uuid": "fd28c7cb-c7f8-4686-9688-c2be22666d83", 00:24:06.957 "is_configured": true, 00:24:06.957 "data_offset": 2048, 00:24:06.957 "data_size": 63488 00:24:06.957 }, 00:24:06.957 { 00:24:06.957 "name": "BaseBdev2", 00:24:06.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.957 "is_configured": false, 00:24:06.957 "data_offset": 0, 00:24:06.957 "data_size": 0 
00:24:06.957 } 00:24:06.957 ] 00:24:06.957 }' 00:24:06.957 13:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.957 13:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.216 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:07.216 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.216 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.475 [2024-10-28 13:35:21.388812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.475 [2024-10-28 13:35:21.389063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:07.475 [2024-10-28 13:35:21.389105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:07.475 BaseBdev2 00:24:07.475 [2024-10-28 13:35:21.389497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:07.475 [2024-10-28 13:35:21.389698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:07.475 [2024-10-28 13:35:21.389714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.475 [2024-10-28 13:35:21.389889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.475 [ 00:24:07.475 { 00:24:07.475 "name": "BaseBdev2", 00:24:07.475 "aliases": [ 00:24:07.475 "6c9c6556-d275-4920-8437-9f3f9e131221" 00:24:07.475 ], 00:24:07.475 "product_name": "Malloc disk", 00:24:07.475 "block_size": 512, 00:24:07.475 "num_blocks": 65536, 00:24:07.475 "uuid": "6c9c6556-d275-4920-8437-9f3f9e131221", 00:24:07.475 "assigned_rate_limits": { 00:24:07.475 "rw_ios_per_sec": 0, 00:24:07.475 "rw_mbytes_per_sec": 0, 00:24:07.475 "r_mbytes_per_sec": 0, 00:24:07.475 "w_mbytes_per_sec": 0 00:24:07.475 }, 00:24:07.475 "claimed": true, 00:24:07.475 "claim_type": "exclusive_write", 00:24:07.475 "zoned": false, 00:24:07.475 "supported_io_types": { 00:24:07.475 "read": true, 00:24:07.475 "write": true, 00:24:07.475 "unmap": true, 00:24:07.475 "flush": true, 00:24:07.475 "reset": true, 00:24:07.475 "nvme_admin": 
false, 00:24:07.475 "nvme_io": false, 00:24:07.475 "nvme_io_md": false, 00:24:07.475 "write_zeroes": true, 00:24:07.475 "zcopy": true, 00:24:07.475 "get_zone_info": false, 00:24:07.475 "zone_management": false, 00:24:07.475 "zone_append": false, 00:24:07.475 "compare": false, 00:24:07.475 "compare_and_write": false, 00:24:07.475 "abort": true, 00:24:07.475 "seek_hole": false, 00:24:07.475 "seek_data": false, 00:24:07.475 "copy": true, 00:24:07.475 "nvme_iov_md": false 00:24:07.475 }, 00:24:07.475 "memory_domains": [ 00:24:07.475 { 00:24:07.475 "dma_device_id": "system", 00:24:07.475 "dma_device_type": 1 00:24:07.475 }, 00:24:07.475 { 00:24:07.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.475 "dma_device_type": 2 00:24:07.475 } 00:24:07.475 ], 00:24:07.475 "driver_specific": {} 00:24:07.475 } 00:24:07.475 ] 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.475 "name": "Existed_Raid", 00:24:07.475 "uuid": "a4e170c6-aed8-47bf-a288-7be930aa00ac", 00:24:07.475 "strip_size_kb": 64, 00:24:07.475 "state": "online", 00:24:07.475 "raid_level": "raid0", 00:24:07.475 "superblock": true, 00:24:07.475 "num_base_bdevs": 2, 00:24:07.475 "num_base_bdevs_discovered": 2, 00:24:07.475 "num_base_bdevs_operational": 2, 00:24:07.475 "base_bdevs_list": [ 00:24:07.475 { 00:24:07.475 "name": "BaseBdev1", 00:24:07.475 "uuid": "fd28c7cb-c7f8-4686-9688-c2be22666d83", 00:24:07.475 "is_configured": true, 00:24:07.475 "data_offset": 2048, 00:24:07.475 "data_size": 63488 00:24:07.475 }, 00:24:07.475 { 00:24:07.475 "name": "BaseBdev2", 00:24:07.475 "uuid": "6c9c6556-d275-4920-8437-9f3f9e131221", 00:24:07.475 "is_configured": true, 00:24:07.475 "data_offset": 2048, 00:24:07.475 "data_size": 63488 00:24:07.475 } 00:24:07.475 ] 
00:24:07.475 }' 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.475 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.041 [2024-10-28 13:35:21.929445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.041 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:08.041 "name": "Existed_Raid", 00:24:08.041 "aliases": [ 00:24:08.041 "a4e170c6-aed8-47bf-a288-7be930aa00ac" 00:24:08.041 ], 00:24:08.041 "product_name": "Raid Volume", 00:24:08.041 "block_size": 512, 00:24:08.041 "num_blocks": 126976, 00:24:08.041 "uuid": 
"a4e170c6-aed8-47bf-a288-7be930aa00ac", 00:24:08.041 "assigned_rate_limits": { 00:24:08.041 "rw_ios_per_sec": 0, 00:24:08.041 "rw_mbytes_per_sec": 0, 00:24:08.041 "r_mbytes_per_sec": 0, 00:24:08.041 "w_mbytes_per_sec": 0 00:24:08.041 }, 00:24:08.041 "claimed": false, 00:24:08.041 "zoned": false, 00:24:08.041 "supported_io_types": { 00:24:08.041 "read": true, 00:24:08.041 "write": true, 00:24:08.041 "unmap": true, 00:24:08.041 "flush": true, 00:24:08.041 "reset": true, 00:24:08.041 "nvme_admin": false, 00:24:08.041 "nvme_io": false, 00:24:08.041 "nvme_io_md": false, 00:24:08.041 "write_zeroes": true, 00:24:08.041 "zcopy": false, 00:24:08.042 "get_zone_info": false, 00:24:08.042 "zone_management": false, 00:24:08.042 "zone_append": false, 00:24:08.042 "compare": false, 00:24:08.042 "compare_and_write": false, 00:24:08.042 "abort": false, 00:24:08.042 "seek_hole": false, 00:24:08.042 "seek_data": false, 00:24:08.042 "copy": false, 00:24:08.042 "nvme_iov_md": false 00:24:08.042 }, 00:24:08.042 "memory_domains": [ 00:24:08.042 { 00:24:08.042 "dma_device_id": "system", 00:24:08.042 "dma_device_type": 1 00:24:08.042 }, 00:24:08.042 { 00:24:08.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.042 "dma_device_type": 2 00:24:08.042 }, 00:24:08.042 { 00:24:08.042 "dma_device_id": "system", 00:24:08.042 "dma_device_type": 1 00:24:08.042 }, 00:24:08.042 { 00:24:08.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.042 "dma_device_type": 2 00:24:08.042 } 00:24:08.042 ], 00:24:08.042 "driver_specific": { 00:24:08.042 "raid": { 00:24:08.042 "uuid": "a4e170c6-aed8-47bf-a288-7be930aa00ac", 00:24:08.042 "strip_size_kb": 64, 00:24:08.042 "state": "online", 00:24:08.042 "raid_level": "raid0", 00:24:08.042 "superblock": true, 00:24:08.042 "num_base_bdevs": 2, 00:24:08.042 "num_base_bdevs_discovered": 2, 00:24:08.042 "num_base_bdevs_operational": 2, 00:24:08.042 "base_bdevs_list": [ 00:24:08.042 { 00:24:08.042 "name": "BaseBdev1", 00:24:08.042 "uuid": 
"fd28c7cb-c7f8-4686-9688-c2be22666d83", 00:24:08.042 "is_configured": true, 00:24:08.042 "data_offset": 2048, 00:24:08.042 "data_size": 63488 00:24:08.042 }, 00:24:08.042 { 00:24:08.042 "name": "BaseBdev2", 00:24:08.042 "uuid": "6c9c6556-d275-4920-8437-9f3f9e131221", 00:24:08.042 "is_configured": true, 00:24:08.042 "data_offset": 2048, 00:24:08.042 "data_size": 63488 00:24:08.042 } 00:24:08.042 ] 00:24:08.042 } 00:24:08.042 } 00:24:08.042 }' 00:24:08.042 13:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:08.042 BaseBdev2' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ 
]] 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.042 [2024-10-28 13:35:22.173161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:08.042 [2024-10-28 13:35:22.173199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:08.042 [2024-10-28 13:35:22.173287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 
00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.042 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.301 13:35:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.301 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.301 "name": "Existed_Raid", 00:24:08.301 "uuid": "a4e170c6-aed8-47bf-a288-7be930aa00ac", 00:24:08.301 "strip_size_kb": 64, 00:24:08.301 "state": "offline", 00:24:08.301 "raid_level": "raid0", 00:24:08.301 "superblock": true, 00:24:08.301 "num_base_bdevs": 2, 00:24:08.301 "num_base_bdevs_discovered": 1, 00:24:08.301 "num_base_bdevs_operational": 1, 00:24:08.301 "base_bdevs_list": [ 00:24:08.301 { 00:24:08.301 "name": null, 00:24:08.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.301 "is_configured": false, 00:24:08.301 "data_offset": 0, 00:24:08.301 "data_size": 63488 00:24:08.301 }, 00:24:08.301 { 00:24:08.301 "name": "BaseBdev2", 00:24:08.301 "uuid": "6c9c6556-d275-4920-8437-9f3f9e131221", 00:24:08.301 "is_configured": true, 00:24:08.301 "data_offset": 2048, 00:24:08.301 "data_size": 63488 00:24:08.301 } 00:24:08.301 ] 00:24:08.301 }' 00:24:08.301 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.301 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.559 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:08.819 13:35:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.819 [2024-10-28 13:35:22.765163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:08.819 [2024-10-28 13:35:22.765249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73889 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73889 ']' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73889 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73889 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.819 killing process with pid 73889 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73889' 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73889 00:24:08.819 [2024-10-28 13:35:22.864599] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:08.819 13:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73889 00:24:08.819 [2024-10-28 13:35:22.865888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:09.081 13:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:09.081 00:24:09.081 real 0m4.581s 00:24:09.081 user 0m7.557s 00:24:09.081 sys 0m0.725s 00:24:09.081 13:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:24:09.081 ************************************ 00:24:09.081 END TEST raid_state_function_test_sb 00:24:09.081 ************************************ 00:24:09.081 13:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:09.081 13:35:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:24:09.081 13:35:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:09.081 13:35:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.081 13:35:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:09.081 ************************************ 00:24:09.081 START TEST raid_superblock_test 00:24:09.081 ************************************ 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74135 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74135 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74135 ']' 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.081 13:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.339 [2024-10-28 13:35:23.263536] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:24:09.339 [2024-10-28 13:35:23.263763] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74135 ] 00:24:09.339 [2024-10-28 13:35:23.423612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:09.339 [2024-10-28 13:35:23.458632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.598 [2024-10-28 13:35:23.516744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.598 [2024-10-28 13:35:23.579133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.598 [2024-10-28 13:35:23.579200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.166 malloc1 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.166 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.166 [2024-10-28 13:35:24.264259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:10.167 [2024-10-28 13:35:24.264344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.167 [2024-10-28 13:35:24.264382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:10.167 [2024-10-28 13:35:24.264411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.167 [2024-10-28 13:35:24.267537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.167 [2024-10-28 13:35:24.267617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:10.167 pt1 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.167 malloc2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.167 [2024-10-28 13:35:24.288763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:10.167 [2024-10-28 13:35:24.288833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:10.167 [2024-10-28 13:35:24.288863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:10.167 [2024-10-28 13:35:24.288877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:10.167 [2024-10-28 13:35:24.291895] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:10.167 [2024-10-28 13:35:24.291940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:10.167 pt2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.167 [2024-10-28 13:35:24.296934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:10.167 [2024-10-28 13:35:24.299727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:10.167 [2024-10-28 13:35:24.299925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:24:10.167 [2024-10-28 13:35:24.299945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:10.167 [2024-10-28 13:35:24.300515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:10.167 [2024-10-28 13:35:24.300910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:24:10.167 [2024-10-28 13:35:24.301055] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:24:10.167 [2024-10-28 13:35:24.301360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.167 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.426 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.426 "name": "raid_bdev1", 00:24:10.426 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:10.426 "strip_size_kb": 64, 00:24:10.426 "state": "online", 00:24:10.426 "raid_level": "raid0", 00:24:10.426 "superblock": true, 
00:24:10.426 "num_base_bdevs": 2, 00:24:10.426 "num_base_bdevs_discovered": 2, 00:24:10.426 "num_base_bdevs_operational": 2, 00:24:10.426 "base_bdevs_list": [ 00:24:10.426 { 00:24:10.426 "name": "pt1", 00:24:10.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:10.426 "is_configured": true, 00:24:10.426 "data_offset": 2048, 00:24:10.426 "data_size": 63488 00:24:10.426 }, 00:24:10.426 { 00:24:10.426 "name": "pt2", 00:24:10.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:10.426 "is_configured": true, 00:24:10.426 "data_offset": 2048, 00:24:10.426 "data_size": 63488 00:24:10.426 } 00:24:10.426 ] 00:24:10.426 }' 00:24:10.426 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.426 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:10.684 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.684 [2024-10-28 13:35:24.841952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:24:10.942 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.942 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:10.942 "name": "raid_bdev1", 00:24:10.942 "aliases": [ 00:24:10.942 "3bd0d4bc-8b02-4b54-8681-10990c8740f8" 00:24:10.942 ], 00:24:10.942 "product_name": "Raid Volume", 00:24:10.942 "block_size": 512, 00:24:10.942 "num_blocks": 126976, 00:24:10.942 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:10.942 "assigned_rate_limits": { 00:24:10.942 "rw_ios_per_sec": 0, 00:24:10.942 "rw_mbytes_per_sec": 0, 00:24:10.942 "r_mbytes_per_sec": 0, 00:24:10.942 "w_mbytes_per_sec": 0 00:24:10.942 }, 00:24:10.942 "claimed": false, 00:24:10.942 "zoned": false, 00:24:10.942 "supported_io_types": { 00:24:10.942 "read": true, 00:24:10.942 "write": true, 00:24:10.942 "unmap": true, 00:24:10.942 "flush": true, 00:24:10.942 "reset": true, 00:24:10.942 "nvme_admin": false, 00:24:10.942 "nvme_io": false, 00:24:10.942 "nvme_io_md": false, 00:24:10.942 "write_zeroes": true, 00:24:10.942 "zcopy": false, 00:24:10.942 "get_zone_info": false, 00:24:10.942 "zone_management": false, 00:24:10.942 "zone_append": false, 00:24:10.942 "compare": false, 00:24:10.942 "compare_and_write": false, 00:24:10.942 "abort": false, 00:24:10.942 "seek_hole": false, 00:24:10.942 "seek_data": false, 00:24:10.942 "copy": false, 00:24:10.942 "nvme_iov_md": false 00:24:10.942 }, 00:24:10.942 "memory_domains": [ 00:24:10.942 { 00:24:10.942 "dma_device_id": "system", 00:24:10.942 "dma_device_type": 1 00:24:10.942 }, 00:24:10.942 { 00:24:10.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.942 "dma_device_type": 2 00:24:10.942 }, 00:24:10.942 { 00:24:10.942 "dma_device_id": "system", 00:24:10.942 "dma_device_type": 1 00:24:10.942 }, 00:24:10.942 { 00:24:10.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.942 "dma_device_type": 2 00:24:10.942 } 00:24:10.943 ], 00:24:10.943 
"driver_specific": { 00:24:10.943 "raid": { 00:24:10.943 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:10.943 "strip_size_kb": 64, 00:24:10.943 "state": "online", 00:24:10.943 "raid_level": "raid0", 00:24:10.943 "superblock": true, 00:24:10.943 "num_base_bdevs": 2, 00:24:10.943 "num_base_bdevs_discovered": 2, 00:24:10.943 "num_base_bdevs_operational": 2, 00:24:10.943 "base_bdevs_list": [ 00:24:10.943 { 00:24:10.943 "name": "pt1", 00:24:10.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:10.943 "is_configured": true, 00:24:10.943 "data_offset": 2048, 00:24:10.943 "data_size": 63488 00:24:10.943 }, 00:24:10.943 { 00:24:10.943 "name": "pt2", 00:24:10.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:10.943 "is_configured": true, 00:24:10.943 "data_offset": 2048, 00:24:10.943 "data_size": 63488 00:24:10.943 } 00:24:10.943 ] 00:24:10.943 } 00:24:10.943 } 00:24:10.943 }' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:10.943 pt2' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.943 13:35:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:10.943 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 [2024-10-28 13:35:25.105860] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=3bd0d4bc-8b02-4b54-8681-10990c8740f8 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3bd0d4bc-8b02-4b54-8681-10990c8740f8 ']' 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 [2024-10-28 13:35:25.153545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.202 [2024-10-28 13:35:25.153578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:11.202 [2024-10-28 13:35:25.153690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.202 [2024-10-28 13:35:25.153765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:11.202 [2024-10-28 13:35:25.153800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 
00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local 
es=0 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.202 [2024-10-28 13:35:25.285636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:11.202 [2024-10-28 13:35:25.288407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:11.202 [2024-10-28 13:35:25.288610] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:11.202 [2024-10-28 13:35:25.288790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:11.202 [2024-10-28 13:35:25.288821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:11.202 [2024-10-28 13:35:25.288837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:24:11.202 request: 00:24:11.202 { 00:24:11.202 "name": "raid_bdev1", 00:24:11.202 "raid_level": "raid0", 
00:24:11.202 "base_bdevs": [ 00:24:11.202 "malloc1", 00:24:11.202 "malloc2" 00:24:11.202 ], 00:24:11.202 "strip_size_kb": 64, 00:24:11.202 "superblock": false, 00:24:11.202 "method": "bdev_raid_create", 00:24:11.202 "req_id": 1 00:24:11.202 } 00:24:11.202 Got JSON-RPC error response 00:24:11.202 response: 00:24:11.202 { 00:24:11.202 "code": -17, 00:24:11.202 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:11.202 } 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:11.202 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.203 13:35:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.203 [2024-10-28 13:35:25.357763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:11.203 [2024-10-28 13:35:25.357970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.203 [2024-10-28 13:35:25.358041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:11.203 [2024-10-28 13:35:25.358244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.461 [2024-10-28 13:35:25.361402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.461 [2024-10-28 13:35:25.361574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:11.461 [2024-10-28 13:35:25.361779] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:11.461 [2024-10-28 13:35:25.361953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:11.461 pt1 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.461 "name": "raid_bdev1", 00:24:11.461 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:11.461 "strip_size_kb": 64, 00:24:11.461 "state": "configuring", 00:24:11.461 "raid_level": "raid0", 00:24:11.461 "superblock": true, 00:24:11.461 "num_base_bdevs": 2, 00:24:11.461 "num_base_bdevs_discovered": 1, 00:24:11.461 "num_base_bdevs_operational": 2, 00:24:11.461 "base_bdevs_list": [ 00:24:11.461 { 00:24:11.461 "name": "pt1", 00:24:11.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:11.461 "is_configured": true, 00:24:11.461 "data_offset": 2048, 00:24:11.461 "data_size": 63488 00:24:11.461 }, 00:24:11.461 { 00:24:11.461 "name": null, 00:24:11.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:11.461 "is_configured": false, 00:24:11.461 "data_offset": 2048, 00:24:11.461 "data_size": 63488 00:24:11.461 } 00:24:11.461 ] 00:24:11.461 }' 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.461 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.028 13:35:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.028 [2024-10-28 13:35:25.886066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:12.028 [2024-10-28 13:35:25.886188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.028 [2024-10-28 13:35:25.886223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:12.028 [2024-10-28 13:35:25.886241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.028 [2024-10-28 13:35:25.886802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.028 [2024-10-28 13:35:25.886839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:12.028 [2024-10-28 13:35:25.886941] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:12.028 [2024-10-28 13:35:25.886979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:12.028 [2024-10-28 13:35:25.887122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:12.028 [2024-10-28 13:35:25.887160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:12.028 [2024-10-28 13:35:25.887460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:24:12.028 [2024-10-28 13:35:25.887649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:12.028 [2024-10-28 13:35:25.887665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:12.028 [2024-10-28 13:35:25.887807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:12.028 pt2 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.028 "name": "raid_bdev1", 00:24:12.028 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:12.028 "strip_size_kb": 64, 00:24:12.028 "state": "online", 00:24:12.028 "raid_level": "raid0", 00:24:12.028 "superblock": true, 00:24:12.028 "num_base_bdevs": 2, 00:24:12.028 "num_base_bdevs_discovered": 2, 00:24:12.028 "num_base_bdevs_operational": 2, 00:24:12.028 "base_bdevs_list": [ 00:24:12.028 { 00:24:12.028 "name": "pt1", 00:24:12.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:12.028 "is_configured": true, 00:24:12.028 "data_offset": 2048, 00:24:12.028 "data_size": 63488 00:24:12.028 }, 00:24:12.028 { 00:24:12.028 "name": "pt2", 00:24:12.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:12.028 "is_configured": true, 00:24:12.028 "data_offset": 2048, 00:24:12.028 "data_size": 63488 00:24:12.028 } 00:24:12.028 ] 00:24:12.028 }' 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.028 13:35:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.286 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:12.287 
13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.287 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.287 [2024-10-28 13:35:26.430530] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:12.545 "name": "raid_bdev1", 00:24:12.545 "aliases": [ 00:24:12.545 "3bd0d4bc-8b02-4b54-8681-10990c8740f8" 00:24:12.545 ], 00:24:12.545 "product_name": "Raid Volume", 00:24:12.545 "block_size": 512, 00:24:12.545 "num_blocks": 126976, 00:24:12.545 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:12.545 "assigned_rate_limits": { 00:24:12.545 "rw_ios_per_sec": 0, 00:24:12.545 "rw_mbytes_per_sec": 0, 00:24:12.545 "r_mbytes_per_sec": 0, 00:24:12.545 "w_mbytes_per_sec": 0 00:24:12.545 }, 00:24:12.545 "claimed": false, 00:24:12.545 "zoned": false, 00:24:12.545 "supported_io_types": { 00:24:12.545 "read": true, 00:24:12.545 "write": true, 00:24:12.545 "unmap": true, 00:24:12.545 "flush": true, 00:24:12.545 "reset": true, 00:24:12.545 "nvme_admin": false, 00:24:12.545 "nvme_io": false, 00:24:12.545 "nvme_io_md": false, 00:24:12.545 "write_zeroes": true, 00:24:12.545 "zcopy": false, 00:24:12.545 "get_zone_info": false, 00:24:12.545 "zone_management": false, 00:24:12.545 "zone_append": false, 00:24:12.545 "compare": false, 00:24:12.545 
"compare_and_write": false, 00:24:12.545 "abort": false, 00:24:12.545 "seek_hole": false, 00:24:12.545 "seek_data": false, 00:24:12.545 "copy": false, 00:24:12.545 "nvme_iov_md": false 00:24:12.545 }, 00:24:12.545 "memory_domains": [ 00:24:12.545 { 00:24:12.545 "dma_device_id": "system", 00:24:12.545 "dma_device_type": 1 00:24:12.545 }, 00:24:12.545 { 00:24:12.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.545 "dma_device_type": 2 00:24:12.545 }, 00:24:12.545 { 00:24:12.545 "dma_device_id": "system", 00:24:12.545 "dma_device_type": 1 00:24:12.545 }, 00:24:12.545 { 00:24:12.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.545 "dma_device_type": 2 00:24:12.545 } 00:24:12.545 ], 00:24:12.545 "driver_specific": { 00:24:12.545 "raid": { 00:24:12.545 "uuid": "3bd0d4bc-8b02-4b54-8681-10990c8740f8", 00:24:12.545 "strip_size_kb": 64, 00:24:12.545 "state": "online", 00:24:12.545 "raid_level": "raid0", 00:24:12.545 "superblock": true, 00:24:12.545 "num_base_bdevs": 2, 00:24:12.545 "num_base_bdevs_discovered": 2, 00:24:12.545 "num_base_bdevs_operational": 2, 00:24:12.545 "base_bdevs_list": [ 00:24:12.545 { 00:24:12.545 "name": "pt1", 00:24:12.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:12.545 "is_configured": true, 00:24:12.545 "data_offset": 2048, 00:24:12.545 "data_size": 63488 00:24:12.545 }, 00:24:12.545 { 00:24:12.545 "name": "pt2", 00:24:12.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:12.545 "is_configured": true, 00:24:12.545 "data_offset": 2048, 00:24:12.545 "data_size": 63488 00:24:12.545 } 00:24:12.545 ] 00:24:12.545 } 00:24:12.545 } 00:24:12.545 }' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:12.545 pt2' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:12.545 13:35:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.545 [2024-10-28 13:35:26.682584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:12.545 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3bd0d4bc-8b02-4b54-8681-10990c8740f8 '!=' 3bd0d4bc-8b02-4b54-8681-10990c8740f8 ']' 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74135 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74135 ']' 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74135 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74135 00:24:12.804 killing process with pid 74135 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74135' 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74135 00:24:12.804 [2024-10-28 13:35:26.767739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:12.804 13:35:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74135 00:24:12.804 [2024-10-28 13:35:26.767860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.804 [2024-10-28 13:35:26.767931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.804 [2024-10-28 13:35:26.767961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:12.804 [2024-10-28 13:35:26.796177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:13.062 13:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:13.062 00:24:13.062 real 0m3.883s 00:24:13.062 user 0m6.256s 00:24:13.062 sys 0m0.681s 00:24:13.062 13:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.062 13:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.062 ************************************ 00:24:13.062 END TEST raid_superblock_test 00:24:13.062 ************************************ 00:24:13.062 13:35:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:24:13.062 13:35:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:13.062 13:35:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.062 13:35:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:13.062 ************************************ 00:24:13.062 START TEST raid_read_error_test 00:24:13.062 
************************************ 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 
-- # local fail_per_s 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aYbaCjfzF6 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74340 00:24:13.062 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74340 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74340 ']' 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.063 13:35:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.063 [2024-10-28 13:35:27.203162] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:24:13.063 [2024-10-28 13:35:27.203615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74340 ] 00:24:13.321 [2024-10-28 13:35:27.358039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:13.321 [2024-10-28 13:35:27.392937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.321 [2024-10-28 13:35:27.452898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.580 [2024-10-28 13:35:27.516791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:13.580 [2024-10-28 13:35:27.516845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.146 BaseBdev1_malloc 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.146 13:35:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.146 true 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.146 [2024-10-28 13:35:28.284279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:14.146 [2024-10-28 13:35:28.284520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.146 [2024-10-28 13:35:28.284576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:14.146 [2024-10-28 13:35:28.284603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.146 [2024-10-28 13:35:28.287707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.146 [2024-10-28 13:35:28.287890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:14.146 BaseBdev1 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.146 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 BaseBdev2_malloc 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 true 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 [2024-10-28 13:35:28.321447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:14.405 [2024-10-28 13:35:28.321528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.405 [2024-10-28 13:35:28.321556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:14.405 [2024-10-28 13:35:28.321583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.405 [2024-10-28 13:35:28.324699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.405 [2024-10-28 13:35:28.324754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:14.405 BaseBdev2 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.405 13:35:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 [2024-10-28 13:35:28.329662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:14.405 [2024-10-28 13:35:28.332588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.405 [2024-10-28 13:35:28.332981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:14.405 [2024-10-28 13:35:28.333123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:14.405 [2024-10-28 13:35:28.333605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:14.405 [2024-10-28 13:35:28.333958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:14.405 [2024-10-28 13:35:28.334098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:14.405 [2024-10-28 13:35:28.334387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.405 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.405 "name": "raid_bdev1", 00:24:14.405 "uuid": "1521fb27-67f3-46b0-b4e1-fc96776e5461", 00:24:14.405 "strip_size_kb": 64, 00:24:14.405 "state": "online", 00:24:14.405 "raid_level": "raid0", 00:24:14.405 "superblock": true, 00:24:14.405 "num_base_bdevs": 2, 00:24:14.405 "num_base_bdevs_discovered": 2, 00:24:14.405 "num_base_bdevs_operational": 2, 00:24:14.405 "base_bdevs_list": [ 00:24:14.405 { 00:24:14.405 "name": "BaseBdev1", 00:24:14.406 "uuid": "ba5daa14-de10-560b-85f0-f685231e67ba", 00:24:14.406 "is_configured": true, 00:24:14.406 "data_offset": 2048, 00:24:14.406 "data_size": 63488 00:24:14.406 }, 00:24:14.406 { 00:24:14.406 "name": "BaseBdev2", 00:24:14.406 "uuid": "f3ca09ad-52b8-5cb2-8400-6d87d9a28c66", 00:24:14.406 "is_configured": true, 00:24:14.406 "data_offset": 2048, 00:24:14.406 "data_size": 63488 00:24:14.406 } 00:24:14.406 ] 00:24:14.406 }' 00:24:14.406 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.406 13:35:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.995 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:14.995 13:35:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:14.995 [2024-10-28 13:35:29.015092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.930 "name": "raid_bdev1", 00:24:15.930 "uuid": "1521fb27-67f3-46b0-b4e1-fc96776e5461", 00:24:15.930 "strip_size_kb": 64, 00:24:15.930 "state": "online", 00:24:15.930 "raid_level": "raid0", 00:24:15.930 "superblock": true, 00:24:15.930 "num_base_bdevs": 2, 00:24:15.930 "num_base_bdevs_discovered": 2, 00:24:15.930 "num_base_bdevs_operational": 2, 00:24:15.930 "base_bdevs_list": [ 00:24:15.930 { 00:24:15.930 "name": "BaseBdev1", 00:24:15.930 "uuid": "ba5daa14-de10-560b-85f0-f685231e67ba", 00:24:15.930 "is_configured": true, 00:24:15.930 "data_offset": 2048, 00:24:15.930 "data_size": 63488 00:24:15.930 }, 00:24:15.930 { 00:24:15.930 "name": "BaseBdev2", 00:24:15.930 "uuid": "f3ca09ad-52b8-5cb2-8400-6d87d9a28c66", 00:24:15.930 "is_configured": true, 00:24:15.930 "data_offset": 2048, 00:24:15.930 "data_size": 63488 00:24:15.930 } 00:24:15.930 ] 00:24:15.930 }' 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.930 13:35:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.498 [2024-10-28 13:35:30.397896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:16.498 [2024-10-28 13:35:30.398095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.498 [2024-10-28 13:35:30.401814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.498 { 00:24:16.498 "results": [ 00:24:16.498 { 00:24:16.498 "job": "raid_bdev1", 00:24:16.498 "core_mask": "0x1", 00:24:16.498 "workload": "randrw", 00:24:16.498 "percentage": 50, 00:24:16.498 "status": "finished", 00:24:16.498 "queue_depth": 1, 00:24:16.498 "io_size": 131072, 00:24:16.498 "runtime": 1.380611, 00:24:16.498 "iops": 10745.966821936085, 00:24:16.498 "mibps": 1343.2458527420106, 00:24:16.498 "io_failed": 1, 00:24:16.498 "io_timeout": 0, 00:24:16.498 "avg_latency_us": 130.46325120858788, 00:24:16.498 "min_latency_us": 42.82181818181818, 00:24:16.498 "max_latency_us": 1854.370909090909 00:24:16.498 } 00:24:16.498 ], 00:24:16.498 "core_count": 1 00:24:16.498 } 00:24:16.498 [2024-10-28 13:35:30.402070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.498 [2024-10-28 13:35:30.402131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.498 [2024-10-28 13:35:30.402192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@841 -- # killprocess 74340 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74340 ']' 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74340 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74340 00:24:16.498 killing process with pid 74340 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74340' 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74340 00:24:16.498 [2024-10-28 13:35:30.446515] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.498 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74340 00:24:16.498 [2024-10-28 13:35:30.465732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.757 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aYbaCjfzF6 00:24:16.757 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:16.757 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:16.758 ************************************ 00:24:16.758 END 
TEST raid_read_error_test 00:24:16.758 ************************************ 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:24:16.758 00:24:16.758 real 0m3.635s 00:24:16.758 user 0m4.910s 00:24:16.758 sys 0m0.536s 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:16.758 13:35:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.758 13:35:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:24:16.758 13:35:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:16.758 13:35:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:16.758 13:35:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:16.758 ************************************ 00:24:16.758 START TEST raid_write_error_test 00:24:16.758 ************************************ 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Gds4QrjnMR 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74476 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74476 00:24:16.758 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74476 ']' 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.758 13:35:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.758 [2024-10-28 13:35:30.889773] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:16.758 [2024-10-28 13:35:30.889948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74476 ] 00:24:17.017 [2024-10-28 13:35:31.038604] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:17.017 [2024-10-28 13:35:31.075826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.017 [2024-10-28 13:35:31.134634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.275 [2024-10-28 13:35:31.197535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:17.275 [2024-10-28 13:35:31.197599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 BaseBdev1_malloc 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 true 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 [2024-10-28 13:35:32.032001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:18.211 [2024-10-28 13:35:32.032095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.211 [2024-10-28 13:35:32.032131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:18.211 [2024-10-28 13:35:32.032171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.211 [2024-10-28 13:35:32.035402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.211 [2024-10-28 13:35:32.035482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:18.211 BaseBdev1 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 BaseBdev2_malloc 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 true 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 [2024-10-28 13:35:32.064528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:18.211 [2024-10-28 13:35:32.064817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.211 [2024-10-28 13:35:32.064855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:18.211 [2024-10-28 13:35:32.064874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.211 [2024-10-28 13:35:32.067915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.211 [2024-10-28 13:35:32.068115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:18.211 BaseBdev2 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.211 [2024-10-28 13:35:32.072753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:18.211 [2024-10-28 13:35:32.075303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:18.211 [2024-10-28 13:35:32.075540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:18.211 
[2024-10-28 13:35:32.075581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:18.211 [2024-10-28 13:35:32.075917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:18.211 [2024-10-28 13:35:32.076121] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:18.211 [2024-10-28 13:35:32.076161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:18.211 [2024-10-28 13:35:32.076346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.211 13:35:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.211 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.212 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.212 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.212 "name": "raid_bdev1", 00:24:18.212 "uuid": "9d7e2c21-f868-405e-a74d-98c4bbb84316", 00:24:18.212 "strip_size_kb": 64, 00:24:18.212 "state": "online", 00:24:18.212 "raid_level": "raid0", 00:24:18.212 "superblock": true, 00:24:18.212 "num_base_bdevs": 2, 00:24:18.212 "num_base_bdevs_discovered": 2, 00:24:18.212 "num_base_bdevs_operational": 2, 00:24:18.212 "base_bdevs_list": [ 00:24:18.212 { 00:24:18.212 "name": "BaseBdev1", 00:24:18.212 "uuid": "5cd00eed-cbef-54a8-9616-1415a9d75069", 00:24:18.212 "is_configured": true, 00:24:18.212 "data_offset": 2048, 00:24:18.212 "data_size": 63488 00:24:18.212 }, 00:24:18.212 { 00:24:18.212 "name": "BaseBdev2", 00:24:18.212 "uuid": "ddf43448-6e7e-5af9-bc88-5272450bcf15", 00:24:18.212 "is_configured": true, 00:24:18.212 "data_offset": 2048, 00:24:18.212 "data_size": 63488 00:24:18.212 } 00:24:18.212 ] 00:24:18.212 }' 00:24:18.212 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.212 13:35:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.470 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:18.470 13:35:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:18.729 [2024-10-28 13:35:32.753548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:19.662 13:35:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.662 "name": "raid_bdev1", 00:24:19.662 "uuid": "9d7e2c21-f868-405e-a74d-98c4bbb84316", 00:24:19.662 "strip_size_kb": 64, 00:24:19.662 "state": "online", 00:24:19.662 "raid_level": "raid0", 00:24:19.662 "superblock": true, 00:24:19.662 "num_base_bdevs": 2, 00:24:19.662 "num_base_bdevs_discovered": 2, 00:24:19.662 "num_base_bdevs_operational": 2, 00:24:19.662 "base_bdevs_list": [ 00:24:19.662 { 00:24:19.662 "name": "BaseBdev1", 00:24:19.662 "uuid": "5cd00eed-cbef-54a8-9616-1415a9d75069", 00:24:19.662 "is_configured": true, 00:24:19.662 "data_offset": 2048, 00:24:19.662 "data_size": 63488 00:24:19.662 }, 00:24:19.662 { 00:24:19.662 "name": "BaseBdev2", 00:24:19.662 "uuid": "ddf43448-6e7e-5af9-bc88-5272450bcf15", 00:24:19.662 "is_configured": true, 00:24:19.662 "data_offset": 2048, 00:24:19.662 "data_size": 63488 00:24:19.662 } 00:24:19.662 ] 00:24:19.662 }' 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.662 13:35:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.229 [2024-10-28 13:35:34.155800] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:20.229 [2024-10-28 13:35:34.155861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:20.229 [2024-10-28 13:35:34.159251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:20.229 [2024-10-28 13:35:34.159326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.229 [2024-10-28 13:35:34.159375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:20.229 [2024-10-28 13:35:34.159395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:20.229 { 00:24:20.229 "results": [ 00:24:20.229 { 00:24:20.229 "job": "raid_bdev1", 00:24:20.229 "core_mask": "0x1", 00:24:20.229 "workload": "randrw", 00:24:20.229 "percentage": 50, 00:24:20.229 "status": "finished", 00:24:20.229 "queue_depth": 1, 00:24:20.229 "io_size": 131072, 00:24:20.229 "runtime": 1.399439, 00:24:20.229 "iops": 9587.413242020553, 00:24:20.229 "mibps": 1198.4266552525692, 00:24:20.229 "io_failed": 1, 00:24:20.229 "io_timeout": 0, 00:24:20.229 "avg_latency_us": 145.70235369042942, 00:24:20.229 "min_latency_us": 42.123636363636365, 00:24:20.229 "max_latency_us": 1884.16 00:24:20.229 } 00:24:20.229 ], 00:24:20.229 "core_count": 1 00:24:20.229 } 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74476 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74476 ']' 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74476 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74476 00:24:20.229 killing process with pid 74476 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74476' 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74476 00:24:20.229 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74476 00:24:20.229 [2024-10-28 13:35:34.197909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:20.229 [2024-10-28 13:35:34.219150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Gds4QrjnMR 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:24:20.487 00:24:20.487 real 0m3.701s 00:24:20.487 user 0m5.048s 00:24:20.487 sys 0m0.536s 00:24:20.487 ************************************ 00:24:20.487 13:35:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.487 13:35:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.487 END TEST raid_write_error_test 00:24:20.487 ************************************ 00:24:20.487 13:35:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:20.487 13:35:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:24:20.487 13:35:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:20.487 13:35:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.487 13:35:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:20.487 ************************************ 00:24:20.487 START TEST raid_state_function_test 00:24:20.487 ************************************ 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74608 00:24:20.487 Process raid pid: 74608 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74608' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:20.487 13:35:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74608 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74608 ']' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.487 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.488 13:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:20.488 [2024-10-28 13:35:34.644438] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:20.488 [2024-10-28 13:35:34.644629] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.746 [2024-10-28 13:35:34.802727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:20.746 [2024-10-28 13:35:34.837898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.746 [2024-10-28 13:35:34.898909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.004 [2024-10-28 13:35:34.961987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.004 [2024-10-28 13:35:34.962054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.569 [2024-10-28 13:35:35.712067] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:21.569 [2024-10-28 13:35:35.712132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:21.569 [2024-10-28 13:35:35.712180] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.569 [2024-10-28 13:35:35.712195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.569 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.570 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.570 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.570 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.570 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:21.828 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.828 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.828 "name": "Existed_Raid", 00:24:21.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.828 "strip_size_kb": 64, 00:24:21.828 "state": "configuring", 00:24:21.828 "raid_level": "concat", 00:24:21.828 "superblock": false, 00:24:21.828 "num_base_bdevs": 2, 00:24:21.828 "num_base_bdevs_discovered": 0, 00:24:21.828 "num_base_bdevs_operational": 2, 00:24:21.828 "base_bdevs_list": [ 00:24:21.828 { 00:24:21.828 "name": "BaseBdev1", 00:24:21.828 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:21.828 "is_configured": false, 00:24:21.828 "data_offset": 0, 00:24:21.828 "data_size": 0 00:24:21.828 }, 00:24:21.828 { 00:24:21.828 "name": "BaseBdev2", 00:24:21.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.828 "is_configured": false, 00:24:21.828 "data_offset": 0, 00:24:21.828 "data_size": 0 00:24:21.828 } 00:24:21.828 ] 00:24:21.828 }' 00:24:21.828 13:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.828 13:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.087 [2024-10-28 13:35:36.228022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.087 [2024-10-28 13:35:36.228099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.087 [2024-10-28 13:35:36.236042] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:22.087 [2024-10-28 13:35:36.236103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:22.087 
[2024-10-28 13:35:36.236121] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:22.087 [2024-10-28 13:35:36.236152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.087 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.345 [2024-10-28 13:35:36.256682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.345 BaseBdev1 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.346 13:35:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.346 [ 00:24:22.346 { 00:24:22.346 "name": "BaseBdev1", 00:24:22.346 "aliases": [ 00:24:22.346 "157ac1e6-835d-470a-9267-7e85a70e475d" 00:24:22.346 ], 00:24:22.346 "product_name": "Malloc disk", 00:24:22.346 "block_size": 512, 00:24:22.346 "num_blocks": 65536, 00:24:22.346 "uuid": "157ac1e6-835d-470a-9267-7e85a70e475d", 00:24:22.346 "assigned_rate_limits": { 00:24:22.346 "rw_ios_per_sec": 0, 00:24:22.346 "rw_mbytes_per_sec": 0, 00:24:22.346 "r_mbytes_per_sec": 0, 00:24:22.346 "w_mbytes_per_sec": 0 00:24:22.346 }, 00:24:22.346 "claimed": true, 00:24:22.346 "claim_type": "exclusive_write", 00:24:22.346 "zoned": false, 00:24:22.346 "supported_io_types": { 00:24:22.346 "read": true, 00:24:22.346 "write": true, 00:24:22.346 "unmap": true, 00:24:22.346 "flush": true, 00:24:22.346 "reset": true, 00:24:22.346 "nvme_admin": false, 00:24:22.346 "nvme_io": false, 00:24:22.346 "nvme_io_md": false, 00:24:22.346 "write_zeroes": true, 00:24:22.346 "zcopy": true, 00:24:22.346 "get_zone_info": false, 00:24:22.346 "zone_management": false, 00:24:22.346 "zone_append": false, 00:24:22.346 "compare": false, 00:24:22.346 "compare_and_write": false, 00:24:22.346 "abort": true, 00:24:22.346 "seek_hole": false, 00:24:22.346 "seek_data": false, 00:24:22.346 "copy": true, 00:24:22.346 "nvme_iov_md": false 00:24:22.346 }, 00:24:22.346 "memory_domains": [ 00:24:22.346 { 00:24:22.346 "dma_device_id": "system", 00:24:22.346 "dma_device_type": 1 00:24:22.346 }, 00:24:22.346 { 00:24:22.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.346 "dma_device_type": 
2 00:24:22.346 } 00:24:22.346 ], 00:24:22.346 "driver_specific": {} 00:24:22.346 } 00:24:22.346 ] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.346 "name": "Existed_Raid", 00:24:22.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.346 "strip_size_kb": 64, 00:24:22.346 "state": "configuring", 00:24:22.346 "raid_level": "concat", 00:24:22.346 "superblock": false, 00:24:22.346 "num_base_bdevs": 2, 00:24:22.346 "num_base_bdevs_discovered": 1, 00:24:22.346 "num_base_bdevs_operational": 2, 00:24:22.346 "base_bdevs_list": [ 00:24:22.346 { 00:24:22.346 "name": "BaseBdev1", 00:24:22.346 "uuid": "157ac1e6-835d-470a-9267-7e85a70e475d", 00:24:22.346 "is_configured": true, 00:24:22.346 "data_offset": 0, 00:24:22.346 "data_size": 65536 00:24:22.346 }, 00:24:22.346 { 00:24:22.346 "name": "BaseBdev2", 00:24:22.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.346 "is_configured": false, 00:24:22.346 "data_offset": 0, 00:24:22.346 "data_size": 0 00:24:22.346 } 00:24:22.346 ] 00:24:22.346 }' 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.346 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:22.913 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.913 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.913 [2024-10-28 13:35:36.812893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.913 [2024-10-28 13:35:36.812977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:22.913 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.914 13:35:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 [2024-10-28 13:35:36.820904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.914 [2024-10-28 13:35:36.823450] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:22.914 [2024-10-28 13:35:36.823501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.914 "name": "Existed_Raid", 00:24:22.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.914 "strip_size_kb": 64, 00:24:22.914 "state": "configuring", 00:24:22.914 "raid_level": "concat", 00:24:22.914 "superblock": false, 00:24:22.914 "num_base_bdevs": 2, 00:24:22.914 "num_base_bdevs_discovered": 1, 00:24:22.914 "num_base_bdevs_operational": 2, 00:24:22.914 "base_bdevs_list": [ 00:24:22.914 { 00:24:22.914 "name": "BaseBdev1", 00:24:22.914 "uuid": "157ac1e6-835d-470a-9267-7e85a70e475d", 00:24:22.914 "is_configured": true, 00:24:22.914 "data_offset": 0, 00:24:22.914 "data_size": 65536 00:24:22.914 }, 00:24:22.914 { 00:24:22.914 "name": "BaseBdev2", 00:24:22.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.914 "is_configured": false, 00:24:22.914 "data_offset": 0, 00:24:22.914 "data_size": 0 00:24:22.914 } 00:24:22.914 ] 00:24:22.914 }' 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.914 13:35:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.481 [2024-10-28 13:35:37.378466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:23.481 [2024-10-28 13:35:37.378522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:23.481 [2024-10-28 13:35:37.378540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:23.481 [2024-10-28 13:35:37.378857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:23.481 [2024-10-28 13:35:37.379057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:23.481 [2024-10-28 13:35:37.379073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:23.481 BaseBdev2 00:24:23.481 [2024-10-28 13:35:37.379360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.481 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.482 [ 00:24:23.482 { 00:24:23.482 "name": "BaseBdev2", 00:24:23.482 "aliases": [ 00:24:23.482 "67b0296c-9af7-415f-a6bf-843085fed469" 00:24:23.482 ], 00:24:23.482 "product_name": "Malloc disk", 00:24:23.482 "block_size": 512, 00:24:23.482 "num_blocks": 65536, 00:24:23.482 "uuid": "67b0296c-9af7-415f-a6bf-843085fed469", 00:24:23.482 "assigned_rate_limits": { 00:24:23.482 "rw_ios_per_sec": 0, 00:24:23.482 "rw_mbytes_per_sec": 0, 00:24:23.482 "r_mbytes_per_sec": 0, 00:24:23.482 "w_mbytes_per_sec": 0 00:24:23.482 }, 00:24:23.482 "claimed": true, 00:24:23.482 "claim_type": "exclusive_write", 00:24:23.482 "zoned": false, 00:24:23.482 "supported_io_types": { 00:24:23.482 "read": true, 00:24:23.482 "write": true, 00:24:23.482 "unmap": true, 00:24:23.482 "flush": true, 00:24:23.482 "reset": true, 00:24:23.482 "nvme_admin": false, 00:24:23.482 "nvme_io": false, 00:24:23.482 "nvme_io_md": false, 00:24:23.482 "write_zeroes": true, 00:24:23.482 "zcopy": true, 00:24:23.482 "get_zone_info": false, 00:24:23.482 "zone_management": false, 00:24:23.482 "zone_append": false, 00:24:23.482 "compare": false, 00:24:23.482 "compare_and_write": false, 
00:24:23.482 "abort": true, 00:24:23.482 "seek_hole": false, 00:24:23.482 "seek_data": false, 00:24:23.482 "copy": true, 00:24:23.482 "nvme_iov_md": false 00:24:23.482 }, 00:24:23.482 "memory_domains": [ 00:24:23.482 { 00:24:23.482 "dma_device_id": "system", 00:24:23.482 "dma_device_type": 1 00:24:23.482 }, 00:24:23.482 { 00:24:23.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.482 "dma_device_type": 2 00:24:23.482 } 00:24:23.482 ], 00:24:23.482 "driver_specific": {} 00:24:23.482 } 00:24:23.482 ] 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:23.482 
13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.482 "name": "Existed_Raid", 00:24:23.482 "uuid": "c46a2acd-6b8f-4718-b5d3-4007fbd32632", 00:24:23.482 "strip_size_kb": 64, 00:24:23.482 "state": "online", 00:24:23.482 "raid_level": "concat", 00:24:23.482 "superblock": false, 00:24:23.482 "num_base_bdevs": 2, 00:24:23.482 "num_base_bdevs_discovered": 2, 00:24:23.482 "num_base_bdevs_operational": 2, 00:24:23.482 "base_bdevs_list": [ 00:24:23.482 { 00:24:23.482 "name": "BaseBdev1", 00:24:23.482 "uuid": "157ac1e6-835d-470a-9267-7e85a70e475d", 00:24:23.482 "is_configured": true, 00:24:23.482 "data_offset": 0, 00:24:23.482 "data_size": 65536 00:24:23.482 }, 00:24:23.482 { 00:24:23.482 "name": "BaseBdev2", 00:24:23.482 "uuid": "67b0296c-9af7-415f-a6bf-843085fed469", 00:24:23.482 "is_configured": true, 00:24:23.482 "data_offset": 0, 00:24:23.482 "data_size": 65536 00:24:23.482 } 00:24:23.482 ] 00:24:23.482 }' 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.482 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:24.048 13:35:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.048 [2024-10-28 13:35:37.955104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:24.048 13:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.048 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:24.048 "name": "Existed_Raid", 00:24:24.048 "aliases": [ 00:24:24.048 "c46a2acd-6b8f-4718-b5d3-4007fbd32632" 00:24:24.048 ], 00:24:24.048 "product_name": "Raid Volume", 00:24:24.048 "block_size": 512, 00:24:24.048 "num_blocks": 131072, 00:24:24.048 "uuid": "c46a2acd-6b8f-4718-b5d3-4007fbd32632", 00:24:24.048 "assigned_rate_limits": { 00:24:24.048 "rw_ios_per_sec": 0, 00:24:24.048 "rw_mbytes_per_sec": 0, 00:24:24.048 "r_mbytes_per_sec": 0, 00:24:24.048 "w_mbytes_per_sec": 0 00:24:24.048 }, 00:24:24.048 "claimed": false, 00:24:24.048 "zoned": false, 00:24:24.048 "supported_io_types": { 00:24:24.048 "read": true, 00:24:24.048 "write": true, 00:24:24.048 "unmap": true, 00:24:24.048 
"flush": true, 00:24:24.048 "reset": true, 00:24:24.048 "nvme_admin": false, 00:24:24.048 "nvme_io": false, 00:24:24.048 "nvme_io_md": false, 00:24:24.048 "write_zeroes": true, 00:24:24.048 "zcopy": false, 00:24:24.048 "get_zone_info": false, 00:24:24.048 "zone_management": false, 00:24:24.048 "zone_append": false, 00:24:24.048 "compare": false, 00:24:24.048 "compare_and_write": false, 00:24:24.048 "abort": false, 00:24:24.049 "seek_hole": false, 00:24:24.049 "seek_data": false, 00:24:24.049 "copy": false, 00:24:24.049 "nvme_iov_md": false 00:24:24.049 }, 00:24:24.049 "memory_domains": [ 00:24:24.049 { 00:24:24.049 "dma_device_id": "system", 00:24:24.049 "dma_device_type": 1 00:24:24.049 }, 00:24:24.049 { 00:24:24.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.049 "dma_device_type": 2 00:24:24.049 }, 00:24:24.049 { 00:24:24.049 "dma_device_id": "system", 00:24:24.049 "dma_device_type": 1 00:24:24.049 }, 00:24:24.049 { 00:24:24.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.049 "dma_device_type": 2 00:24:24.049 } 00:24:24.049 ], 00:24:24.049 "driver_specific": { 00:24:24.049 "raid": { 00:24:24.049 "uuid": "c46a2acd-6b8f-4718-b5d3-4007fbd32632", 00:24:24.049 "strip_size_kb": 64, 00:24:24.049 "state": "online", 00:24:24.049 "raid_level": "concat", 00:24:24.049 "superblock": false, 00:24:24.049 "num_base_bdevs": 2, 00:24:24.049 "num_base_bdevs_discovered": 2, 00:24:24.049 "num_base_bdevs_operational": 2, 00:24:24.049 "base_bdevs_list": [ 00:24:24.049 { 00:24:24.049 "name": "BaseBdev1", 00:24:24.049 "uuid": "157ac1e6-835d-470a-9267-7e85a70e475d", 00:24:24.049 "is_configured": true, 00:24:24.049 "data_offset": 0, 00:24:24.049 "data_size": 65536 00:24:24.049 }, 00:24:24.049 { 00:24:24.049 "name": "BaseBdev2", 00:24:24.049 "uuid": "67b0296c-9af7-415f-a6bf-843085fed469", 00:24:24.049 "is_configured": true, 00:24:24.049 "data_offset": 0, 00:24:24.049 "data_size": 65536 00:24:24.049 } 00:24:24.049 ] 00:24:24.049 } 00:24:24.049 } 00:24:24.049 }' 00:24:24.049 
13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:24.049 BaseBdev2' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.049 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.049 13:35:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 [2024-10-28 13:35:38.246907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:24.307 [2024-10-28 13:35:38.246981] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.307 [2024-10-28 13:35:38.247067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.307 "name": "Existed_Raid", 00:24:24.307 "uuid": "c46a2acd-6b8f-4718-b5d3-4007fbd32632", 00:24:24.307 "strip_size_kb": 64, 00:24:24.307 "state": "offline", 00:24:24.307 "raid_level": "concat", 00:24:24.307 "superblock": false, 00:24:24.307 "num_base_bdevs": 2, 00:24:24.307 "num_base_bdevs_discovered": 1, 00:24:24.307 "num_base_bdevs_operational": 1, 00:24:24.307 
"base_bdevs_list": [ 00:24:24.307 { 00:24:24.307 "name": null, 00:24:24.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.307 "is_configured": false, 00:24:24.307 "data_offset": 0, 00:24:24.307 "data_size": 65536 00:24:24.307 }, 00:24:24.307 { 00:24:24.307 "name": "BaseBdev2", 00:24:24.307 "uuid": "67b0296c-9af7-415f-a6bf-843085fed469", 00:24:24.307 "is_configured": true, 00:24:24.307 "data_offset": 0, 00:24:24.307 "data_size": 65536 00:24:24.307 } 00:24:24.307 ] 00:24:24.307 }' 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.307 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.927 [2024-10-28 13:35:38.841108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:24.927 [2024-10-28 13:35:38.841235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74608 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74608 ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74608 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74608 00:24:24.927 killing process with pid 74608 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74608' 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74608 00:24:24.927 [2024-10-28 13:35:38.948810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.927 13:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74608 00:24:24.927 [2024-10-28 13:35:38.950424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.185 ************************************ 00:24:25.185 END TEST raid_state_function_test 00:24:25.185 ************************************ 00:24:25.185 13:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:25.185 00:24:25.185 real 0m4.667s 00:24:25.185 user 0m7.700s 00:24:25.185 sys 0m0.726s 00:24:25.185 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.185 13:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.185 13:35:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:24:25.185 13:35:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:25.185 13:35:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.185 13:35:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:25.185 ************************************ 00:24:25.185 START TEST 
raid_state_function_test_sb 00:24:25.185 ************************************ 00:24:25.185 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:25.186 
13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:25.186 Process raid pid: 74856 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74856 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74856' 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74856 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74856 ']' 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.186 13:35:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.445 [2024-10-28 13:35:39.398832] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:25.445 [2024-10-28 13:35:39.399021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.445 [2024-10-28 13:35:39.561424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:25.445 [2024-10-28 13:35:39.598577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.703 [2024-10-28 13:35:39.659453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.703 [2024-10-28 13:35:39.722903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.703 [2024-10-28 13:35:39.722950] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.270 [2024-10-28 13:35:40.407467] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:24:26.270 [2024-10-28 13:35:40.407553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.270 [2024-10-28 13:35:40.407587] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.270 [2024-10-28 13:35:40.407603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.270 13:35:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.270 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.529 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.529 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.529 "name": "Existed_Raid", 00:24:26.529 "uuid": "32f3dec4-cad4-4d38-a54b-bdf52d48872e", 00:24:26.529 "strip_size_kb": 64, 00:24:26.529 "state": "configuring", 00:24:26.529 "raid_level": "concat", 00:24:26.529 "superblock": true, 00:24:26.529 "num_base_bdevs": 2, 00:24:26.529 "num_base_bdevs_discovered": 0, 00:24:26.529 "num_base_bdevs_operational": 2, 00:24:26.529 "base_bdevs_list": [ 00:24:26.529 { 00:24:26.529 "name": "BaseBdev1", 00:24:26.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.529 "is_configured": false, 00:24:26.529 "data_offset": 0, 00:24:26.529 "data_size": 0 00:24:26.529 }, 00:24:26.529 { 00:24:26.529 "name": "BaseBdev2", 00:24:26.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.529 "is_configured": false, 00:24:26.529 "data_offset": 0, 00:24:26.529 "data_size": 0 00:24:26.529 } 00:24:26.529 ] 00:24:26.529 }' 00:24:26.529 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.529 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 [2024-10-28 13:35:40.983465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.097 [2024-10-28 13:35:40.983528] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 [2024-10-28 13:35:40.991488] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:27.097 [2024-10-28 13:35:40.991560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:27.097 [2024-10-28 13:35:40.991584] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.097 [2024-10-28 13:35:40.991601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 13:35:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 [2024-10-28 13:35:41.011737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.097 BaseBdev1 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:27.097 
13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.097 [ 00:24:27.097 { 00:24:27.097 "name": "BaseBdev1", 00:24:27.097 "aliases": [ 00:24:27.097 "42fdbbb4-747b-4aad-ad96-9ed9493875d0" 00:24:27.097 ], 00:24:27.097 "product_name": "Malloc disk", 00:24:27.097 "block_size": 512, 00:24:27.097 "num_blocks": 65536, 00:24:27.097 "uuid": "42fdbbb4-747b-4aad-ad96-9ed9493875d0", 00:24:27.097 "assigned_rate_limits": { 00:24:27.097 "rw_ios_per_sec": 0, 00:24:27.097 "rw_mbytes_per_sec": 0, 00:24:27.097 "r_mbytes_per_sec": 0, 00:24:27.097 "w_mbytes_per_sec": 0 00:24:27.097 }, 00:24:27.097 "claimed": true, 00:24:27.097 "claim_type": "exclusive_write", 00:24:27.097 "zoned": 
false, 00:24:27.097 "supported_io_types": { 00:24:27.097 "read": true, 00:24:27.097 "write": true, 00:24:27.097 "unmap": true, 00:24:27.097 "flush": true, 00:24:27.097 "reset": true, 00:24:27.097 "nvme_admin": false, 00:24:27.097 "nvme_io": false, 00:24:27.097 "nvme_io_md": false, 00:24:27.097 "write_zeroes": true, 00:24:27.097 "zcopy": true, 00:24:27.097 "get_zone_info": false, 00:24:27.097 "zone_management": false, 00:24:27.097 "zone_append": false, 00:24:27.097 "compare": false, 00:24:27.097 "compare_and_write": false, 00:24:27.097 "abort": true, 00:24:27.097 "seek_hole": false, 00:24:27.097 "seek_data": false, 00:24:27.097 "copy": true, 00:24:27.097 "nvme_iov_md": false 00:24:27.097 }, 00:24:27.097 "memory_domains": [ 00:24:27.097 { 00:24:27.097 "dma_device_id": "system", 00:24:27.097 "dma_device_type": 1 00:24:27.097 }, 00:24:27.097 { 00:24:27.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.097 "dma_device_type": 2 00:24:27.097 } 00:24:27.097 ], 00:24:27.097 "driver_specific": {} 00:24:27.097 } 00:24:27.097 ] 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:27.097 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.098 "name": "Existed_Raid", 00:24:27.098 "uuid": "b35b5bc8-84eb-42af-8fc9-18c6f3c1e714", 00:24:27.098 "strip_size_kb": 64, 00:24:27.098 "state": "configuring", 00:24:27.098 "raid_level": "concat", 00:24:27.098 "superblock": true, 00:24:27.098 "num_base_bdevs": 2, 00:24:27.098 "num_base_bdevs_discovered": 1, 00:24:27.098 "num_base_bdevs_operational": 2, 00:24:27.098 "base_bdevs_list": [ 00:24:27.098 { 00:24:27.098 "name": "BaseBdev1", 00:24:27.098 "uuid": "42fdbbb4-747b-4aad-ad96-9ed9493875d0", 00:24:27.098 "is_configured": true, 00:24:27.098 "data_offset": 2048, 00:24:27.098 "data_size": 63488 00:24:27.098 }, 00:24:27.098 { 00:24:27.098 "name": "BaseBdev2", 00:24:27.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.098 "is_configured": false, 00:24:27.098 "data_offset": 0, 00:24:27.098 "data_size": 0 
00:24:27.098 } 00:24:27.098 ] 00:24:27.098 }' 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.098 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 [2024-10-28 13:35:41.587979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.665 [2024-10-28 13:35:41.588061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 [2024-10-28 13:35:41.600014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.665 [2024-10-28 13:35:41.602659] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.665 [2024-10-28 13:35:41.602715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:27.665 
13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.665 
"name": "Existed_Raid", 00:24:27.665 "uuid": "e0e499d1-d51b-4578-b864-b22a48cb9b1f", 00:24:27.665 "strip_size_kb": 64, 00:24:27.665 "state": "configuring", 00:24:27.665 "raid_level": "concat", 00:24:27.665 "superblock": true, 00:24:27.665 "num_base_bdevs": 2, 00:24:27.665 "num_base_bdevs_discovered": 1, 00:24:27.665 "num_base_bdevs_operational": 2, 00:24:27.665 "base_bdevs_list": [ 00:24:27.665 { 00:24:27.665 "name": "BaseBdev1", 00:24:27.665 "uuid": "42fdbbb4-747b-4aad-ad96-9ed9493875d0", 00:24:27.665 "is_configured": true, 00:24:27.665 "data_offset": 2048, 00:24:27.665 "data_size": 63488 00:24:27.665 }, 00:24:27.665 { 00:24:27.665 "name": "BaseBdev2", 00:24:27.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.665 "is_configured": false, 00:24:27.665 "data_offset": 0, 00:24:27.665 "data_size": 0 00:24:27.665 } 00:24:27.665 ] 00:24:27.665 }' 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.665 13:35:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.232 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:28.232 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.232 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.232 [2024-10-28 13:35:42.129591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:28.232 BaseBdev2 00:24:28.232 [2024-10-28 13:35:42.130105] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:28.232 [2024-10-28 13:35:42.130158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:28.233 [2024-10-28 13:35:42.130488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:28.233 [2024-10-28 13:35:42.130682] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:28.233 [2024-10-28 13:35:42.130705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:28.233 [2024-10-28 13:35:42.130874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.233 [ 00:24:28.233 
{ 00:24:28.233 "name": "BaseBdev2", 00:24:28.233 "aliases": [ 00:24:28.233 "3198abf0-766a-4588-9ebb-2f30ab7f21a6" 00:24:28.233 ], 00:24:28.233 "product_name": "Malloc disk", 00:24:28.233 "block_size": 512, 00:24:28.233 "num_blocks": 65536, 00:24:28.233 "uuid": "3198abf0-766a-4588-9ebb-2f30ab7f21a6", 00:24:28.233 "assigned_rate_limits": { 00:24:28.233 "rw_ios_per_sec": 0, 00:24:28.233 "rw_mbytes_per_sec": 0, 00:24:28.233 "r_mbytes_per_sec": 0, 00:24:28.233 "w_mbytes_per_sec": 0 00:24:28.233 }, 00:24:28.233 "claimed": true, 00:24:28.233 "claim_type": "exclusive_write", 00:24:28.233 "zoned": false, 00:24:28.233 "supported_io_types": { 00:24:28.233 "read": true, 00:24:28.233 "write": true, 00:24:28.233 "unmap": true, 00:24:28.233 "flush": true, 00:24:28.233 "reset": true, 00:24:28.233 "nvme_admin": false, 00:24:28.233 "nvme_io": false, 00:24:28.233 "nvme_io_md": false, 00:24:28.233 "write_zeroes": true, 00:24:28.233 "zcopy": true, 00:24:28.233 "get_zone_info": false, 00:24:28.233 "zone_management": false, 00:24:28.233 "zone_append": false, 00:24:28.233 "compare": false, 00:24:28.233 "compare_and_write": false, 00:24:28.233 "abort": true, 00:24:28.233 "seek_hole": false, 00:24:28.233 "seek_data": false, 00:24:28.233 "copy": true, 00:24:28.233 "nvme_iov_md": false 00:24:28.233 }, 00:24:28.233 "memory_domains": [ 00:24:28.233 { 00:24:28.233 "dma_device_id": "system", 00:24:28.233 "dma_device_type": 1 00:24:28.233 }, 00:24:28.233 { 00:24:28.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.233 "dma_device_type": 2 00:24:28.233 } 00:24:28.233 ], 00:24:28.233 "driver_specific": {} 00:24:28.233 } 00:24:28.233 ] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:28.233 13:35:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.233 "name": 
"Existed_Raid", 00:24:28.233 "uuid": "e0e499d1-d51b-4578-b864-b22a48cb9b1f", 00:24:28.233 "strip_size_kb": 64, 00:24:28.233 "state": "online", 00:24:28.233 "raid_level": "concat", 00:24:28.233 "superblock": true, 00:24:28.233 "num_base_bdevs": 2, 00:24:28.233 "num_base_bdevs_discovered": 2, 00:24:28.233 "num_base_bdevs_operational": 2, 00:24:28.233 "base_bdevs_list": [ 00:24:28.233 { 00:24:28.233 "name": "BaseBdev1", 00:24:28.233 "uuid": "42fdbbb4-747b-4aad-ad96-9ed9493875d0", 00:24:28.233 "is_configured": true, 00:24:28.233 "data_offset": 2048, 00:24:28.233 "data_size": 63488 00:24:28.233 }, 00:24:28.233 { 00:24:28.233 "name": "BaseBdev2", 00:24:28.233 "uuid": "3198abf0-766a-4588-9ebb-2f30ab7f21a6", 00:24:28.233 "is_configured": true, 00:24:28.233 "data_offset": 2048, 00:24:28.233 "data_size": 63488 00:24:28.233 } 00:24:28.233 ] 00:24:28.233 }' 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.233 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.799 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:28.799 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:28.799 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.800 [2024-10-28 13:35:42.742249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.800 "name": "Existed_Raid", 00:24:28.800 "aliases": [ 00:24:28.800 "e0e499d1-d51b-4578-b864-b22a48cb9b1f" 00:24:28.800 ], 00:24:28.800 "product_name": "Raid Volume", 00:24:28.800 "block_size": 512, 00:24:28.800 "num_blocks": 126976, 00:24:28.800 "uuid": "e0e499d1-d51b-4578-b864-b22a48cb9b1f", 00:24:28.800 "assigned_rate_limits": { 00:24:28.800 "rw_ios_per_sec": 0, 00:24:28.800 "rw_mbytes_per_sec": 0, 00:24:28.800 "r_mbytes_per_sec": 0, 00:24:28.800 "w_mbytes_per_sec": 0 00:24:28.800 }, 00:24:28.800 "claimed": false, 00:24:28.800 "zoned": false, 00:24:28.800 "supported_io_types": { 00:24:28.800 "read": true, 00:24:28.800 "write": true, 00:24:28.800 "unmap": true, 00:24:28.800 "flush": true, 00:24:28.800 "reset": true, 00:24:28.800 "nvme_admin": false, 00:24:28.800 "nvme_io": false, 00:24:28.800 "nvme_io_md": false, 00:24:28.800 "write_zeroes": true, 00:24:28.800 "zcopy": false, 00:24:28.800 "get_zone_info": false, 00:24:28.800 "zone_management": false, 00:24:28.800 "zone_append": false, 00:24:28.800 "compare": false, 00:24:28.800 "compare_and_write": false, 00:24:28.800 "abort": false, 00:24:28.800 "seek_hole": false, 00:24:28.800 "seek_data": false, 00:24:28.800 "copy": false, 00:24:28.800 "nvme_iov_md": false 00:24:28.800 }, 00:24:28.800 "memory_domains": [ 00:24:28.800 { 00:24:28.800 "dma_device_id": "system", 00:24:28.800 "dma_device_type": 1 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:28.800 "dma_device_type": 2 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "dma_device_id": "system", 00:24:28.800 "dma_device_type": 1 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.800 "dma_device_type": 2 00:24:28.800 } 00:24:28.800 ], 00:24:28.800 "driver_specific": { 00:24:28.800 "raid": { 00:24:28.800 "uuid": "e0e499d1-d51b-4578-b864-b22a48cb9b1f", 00:24:28.800 "strip_size_kb": 64, 00:24:28.800 "state": "online", 00:24:28.800 "raid_level": "concat", 00:24:28.800 "superblock": true, 00:24:28.800 "num_base_bdevs": 2, 00:24:28.800 "num_base_bdevs_discovered": 2, 00:24:28.800 "num_base_bdevs_operational": 2, 00:24:28.800 "base_bdevs_list": [ 00:24:28.800 { 00:24:28.800 "name": "BaseBdev1", 00:24:28.800 "uuid": "42fdbbb4-747b-4aad-ad96-9ed9493875d0", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 }, 00:24:28.800 { 00:24:28.800 "name": "BaseBdev2", 00:24:28.800 "uuid": "3198abf0-766a-4588-9ebb-2f30ab7f21a6", 00:24:28.800 "is_configured": true, 00:24:28.800 "data_offset": 2048, 00:24:28.800 "data_size": 63488 00:24:28.800 } 00:24:28.800 ] 00:24:28.800 } 00:24:28.800 } 00:24:28.800 }' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:28.800 BaseBdev2' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:28.800 13:35:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.800 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 13:35:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.058 [2024-10-28 13:35:43.018044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:29.058 [2024-10-28 13:35:43.018082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.058 [2024-10-28 13:35:43.018186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:29.058 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:29.059 "name": "Existed_Raid", 00:24:29.059 "uuid": "e0e499d1-d51b-4578-b864-b22a48cb9b1f", 00:24:29.059 "strip_size_kb": 64, 00:24:29.059 "state": "offline", 00:24:29.059 "raid_level": "concat", 00:24:29.059 "superblock": true, 00:24:29.059 "num_base_bdevs": 2, 00:24:29.059 "num_base_bdevs_discovered": 1, 00:24:29.059 "num_base_bdevs_operational": 1, 00:24:29.059 "base_bdevs_list": [ 00:24:29.059 { 00:24:29.059 "name": null, 00:24:29.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.059 "is_configured": false, 00:24:29.059 "data_offset": 0, 00:24:29.059 "data_size": 63488 00:24:29.059 }, 00:24:29.059 { 00:24:29.059 "name": "BaseBdev2", 00:24:29.059 "uuid": "3198abf0-766a-4588-9ebb-2f30ab7f21a6", 00:24:29.059 "is_configured": true, 00:24:29.059 "data_offset": 2048, 00:24:29.059 "data_size": 63488 00:24:29.059 } 00:24:29.059 ] 00:24:29.059 }' 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:24:29.059 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.625 [2024-10-28 13:35:43.596489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:29.625 [2024-10-28 13:35:43.596865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:29.625 13:35:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.625 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74856 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74856 ']' 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74856 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74856 00:24:29.626 killing process with pid 74856 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:29.626 13:35:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74856' 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74856 00:24:29.626 [2024-10-28 13:35:43.703371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:29.626 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74856 00:24:29.626 [2024-10-28 13:35:43.704770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.884 ************************************ 00:24:29.884 END TEST raid_state_function_test_sb 00:24:29.884 ************************************ 00:24:29.884 13:35:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:29.884 00:24:29.884 real 0m4.688s 00:24:29.884 user 0m7.639s 00:24:29.884 sys 0m0.835s 00:24:29.884 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:29.884 13:35:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.884 13:35:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:24:29.884 13:35:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:29.884 13:35:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.884 13:35:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:29.884 ************************************ 00:24:29.884 START TEST raid_superblock_test 00:24:29.884 ************************************ 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:29.884 13:35:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75108 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75108 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75108 ']' 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.884 13:35:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:30.142 [2024-10-28 13:35:44.114017] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:30.142 [2024-10-28 13:35:44.114544] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75108 ] 00:24:30.142 [2024-10-28 13:35:44.275348] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:30.401 [2024-10-28 13:35:44.311868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.401 [2024-10-28 13:35:44.371845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.401 [2024-10-28 13:35:44.434402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.401 [2024-10-28 13:35:44.434452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.335 malloc1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.335 [2024-10-28 13:35:45.227145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:31.335 [2024-10-28 13:35:45.227259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.335 [2024-10-28 13:35:45.227296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:31.335 [2024-10-28 13:35:45.227316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.335 [2024-10-28 13:35:45.230328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.335 [2024-10-28 13:35:45.230384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:31.335 pt1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.335 malloc2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.335 [2024-10-28 13:35:45.259078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:31.335 [2024-10-28 13:35:45.259211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.335 [2024-10-28 13:35:45.259245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:31.335 [2024-10-28 13:35:45.259260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.335 [2024-10-28 13:35:45.262191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.335 [2024-10-28 13:35:45.262235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:31.335 pt2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.335 [2024-10-28 13:35:45.271159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:31.335 [2024-10-28 13:35:45.274457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:31.335 [2024-10-28 13:35:45.274663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:24:31.335 [2024-10-28 13:35:45.274687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:31.335 [2024-10-28 13:35:45.275034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:31.335 [2024-10-28 13:35:45.275234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:24:31.335 [2024-10-28 13:35:45.275256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:24:31.335 [2024-10-28 13:35:45.275466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:31.335 13:35:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.335 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.336 "name": "raid_bdev1", 00:24:31.336 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:31.336 "strip_size_kb": 64, 00:24:31.336 "state": "online", 00:24:31.336 "raid_level": "concat", 00:24:31.336 "superblock": true, 00:24:31.336 "num_base_bdevs": 2, 00:24:31.336 "num_base_bdevs_discovered": 2, 00:24:31.336 "num_base_bdevs_operational": 2, 00:24:31.336 "base_bdevs_list": [ 00:24:31.336 { 00:24:31.336 "name": "pt1", 00:24:31.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.336 "is_configured": true, 00:24:31.336 "data_offset": 2048, 00:24:31.336 "data_size": 63488 00:24:31.336 }, 00:24:31.336 { 00:24:31.336 "name": "pt2", 00:24:31.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.336 
"is_configured": true, 00:24:31.336 "data_offset": 2048, 00:24:31.336 "data_size": 63488 00:24:31.336 } 00:24:31.336 ] 00:24:31.336 }' 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.336 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.902 [2024-10-28 13:35:45.808052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.902 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.903 "name": "raid_bdev1", 00:24:31.903 "aliases": [ 00:24:31.903 "442ae0a4-25b0-406d-ae97-b9519298e4e5" 00:24:31.903 ], 00:24:31.903 "product_name": "Raid Volume", 00:24:31.903 "block_size": 512, 00:24:31.903 "num_blocks": 126976, 00:24:31.903 "uuid": 
"442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:31.903 "assigned_rate_limits": { 00:24:31.903 "rw_ios_per_sec": 0, 00:24:31.903 "rw_mbytes_per_sec": 0, 00:24:31.903 "r_mbytes_per_sec": 0, 00:24:31.903 "w_mbytes_per_sec": 0 00:24:31.903 }, 00:24:31.903 "claimed": false, 00:24:31.903 "zoned": false, 00:24:31.903 "supported_io_types": { 00:24:31.903 "read": true, 00:24:31.903 "write": true, 00:24:31.903 "unmap": true, 00:24:31.903 "flush": true, 00:24:31.903 "reset": true, 00:24:31.903 "nvme_admin": false, 00:24:31.903 "nvme_io": false, 00:24:31.903 "nvme_io_md": false, 00:24:31.903 "write_zeroes": true, 00:24:31.903 "zcopy": false, 00:24:31.903 "get_zone_info": false, 00:24:31.903 "zone_management": false, 00:24:31.903 "zone_append": false, 00:24:31.903 "compare": false, 00:24:31.903 "compare_and_write": false, 00:24:31.903 "abort": false, 00:24:31.903 "seek_hole": false, 00:24:31.903 "seek_data": false, 00:24:31.903 "copy": false, 00:24:31.903 "nvme_iov_md": false 00:24:31.903 }, 00:24:31.903 "memory_domains": [ 00:24:31.903 { 00:24:31.903 "dma_device_id": "system", 00:24:31.903 "dma_device_type": 1 00:24:31.903 }, 00:24:31.903 { 00:24:31.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.903 "dma_device_type": 2 00:24:31.903 }, 00:24:31.903 { 00:24:31.903 "dma_device_id": "system", 00:24:31.903 "dma_device_type": 1 00:24:31.903 }, 00:24:31.903 { 00:24:31.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.903 "dma_device_type": 2 00:24:31.903 } 00:24:31.903 ], 00:24:31.903 "driver_specific": { 00:24:31.903 "raid": { 00:24:31.903 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:31.903 "strip_size_kb": 64, 00:24:31.903 "state": "online", 00:24:31.903 "raid_level": "concat", 00:24:31.903 "superblock": true, 00:24:31.903 "num_base_bdevs": 2, 00:24:31.903 "num_base_bdevs_discovered": 2, 00:24:31.903 "num_base_bdevs_operational": 2, 00:24:31.903 "base_bdevs_list": [ 00:24:31.903 { 00:24:31.903 "name": "pt1", 00:24:31.903 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:31.903 "is_configured": true, 00:24:31.903 "data_offset": 2048, 00:24:31.903 "data_size": 63488 00:24:31.903 }, 00:24:31.903 { 00:24:31.903 "name": "pt2", 00:24:31.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.903 "is_configured": true, 00:24:31.903 "data_offset": 2048, 00:24:31.903 "data_size": 63488 00:24:31.903 } 00:24:31.903 ] 00:24:31.903 } 00:24:31.903 } 00:24:31.903 }' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:31.903 pt2' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.903 13:35:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.903 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.903 [2024-10-28 13:35:46.052042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=442ae0a4-25b0-406d-ae97-b9519298e4e5 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 442ae0a4-25b0-406d-ae97-b9519298e4e5 ']' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 [2024-10-28 13:35:46.099727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.162 [2024-10-28 13:35:46.099780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.162 [2024-10-28 13:35:46.099900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.162 [2024-10-28 13:35:46.099991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.162 [2024-10-28 13:35:46.100022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 [2024-10-28 13:35:46.255809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:32.162 [2024-10-28 13:35:46.258452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:32.162 [2024-10-28 13:35:46.258537] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:32.162 [2024-10-28 13:35:46.258633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:32.162 [2024-10-28 13:35:46.258664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.162 [2024-10-28 13:35:46.258681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:24:32.162 request: 00:24:32.162 { 00:24:32.162 "name": "raid_bdev1", 00:24:32.162 "raid_level": "concat", 00:24:32.162 "base_bdevs": [ 00:24:32.162 "malloc1", 00:24:32.162 "malloc2" 00:24:32.162 ], 00:24:32.162 "strip_size_kb": 64, 00:24:32.162 "superblock": false, 00:24:32.162 "method": "bdev_raid_create", 00:24:32.162 "req_id": 1 00:24:32.162 } 00:24:32.162 Got JSON-RPC error response 00:24:32.162 response: 00:24:32.162 { 00:24:32.162 "code": -17, 00:24:32.162 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:24:32.162 } 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.162 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.421 [2024-10-28 13:35:46.323789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:32.421 [2024-10-28 13:35:46.324122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.421 [2024-10-28 13:35:46.324209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:32.421 
[2024-10-28 13:35:46.324333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.421 [2024-10-28 13:35:46.327354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.421 [2024-10-28 13:35:46.327517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:32.421 [2024-10-28 13:35:46.327771] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:32.421 [2024-10-28 13:35:46.327952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.421 pt1 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.421 13:35:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.421 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.421 "name": "raid_bdev1", 00:24:32.421 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:32.421 "strip_size_kb": 64, 00:24:32.421 "state": "configuring", 00:24:32.421 "raid_level": "concat", 00:24:32.421 "superblock": true, 00:24:32.421 "num_base_bdevs": 2, 00:24:32.421 "num_base_bdevs_discovered": 1, 00:24:32.421 "num_base_bdevs_operational": 2, 00:24:32.421 "base_bdevs_list": [ 00:24:32.421 { 00:24:32.421 "name": "pt1", 00:24:32.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.422 "is_configured": true, 00:24:32.422 "data_offset": 2048, 00:24:32.422 "data_size": 63488 00:24:32.422 }, 00:24:32.422 { 00:24:32.422 "name": null, 00:24:32.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.422 "is_configured": false, 00:24:32.422 "data_offset": 2048, 00:24:32.422 "data_size": 63488 00:24:32.422 } 00:24:32.422 ] 00:24:32.422 }' 00:24:32.422 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.422 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.988 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:32.988 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:32.988 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 
-u 00000000-0000-0000-0000-000000000002 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.989 [2024-10-28 13:35:46.884077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.989 [2024-10-28 13:35:46.884209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.989 [2024-10-28 13:35:46.884254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:32.989 [2024-10-28 13:35:46.884275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.989 [2024-10-28 13:35:46.884824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.989 [2024-10-28 13:35:46.884869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:32.989 [2024-10-28 13:35:46.884968] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:32.989 [2024-10-28 13:35:46.885005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.989 [2024-10-28 13:35:46.885123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:32.989 [2024-10-28 13:35:46.885159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:32.989 [2024-10-28 13:35:46.885452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:24:32.989 [2024-10-28 13:35:46.885606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:32.989 [2024-10-28 13:35:46.885621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:32.989 [2024-10-28 13:35:46.885783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:24:32.989 pt2 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.989 13:35:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.989 "name": "raid_bdev1", 00:24:32.989 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:32.989 "strip_size_kb": 64, 00:24:32.989 "state": "online", 00:24:32.989 "raid_level": "concat", 00:24:32.989 "superblock": true, 00:24:32.989 "num_base_bdevs": 2, 00:24:32.989 "num_base_bdevs_discovered": 2, 00:24:32.989 "num_base_bdevs_operational": 2, 00:24:32.989 "base_bdevs_list": [ 00:24:32.989 { 00:24:32.989 "name": "pt1", 00:24:32.989 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.989 "is_configured": true, 00:24:32.989 "data_offset": 2048, 00:24:32.989 "data_size": 63488 00:24:32.989 }, 00:24:32.989 { 00:24:32.989 "name": "pt2", 00:24:32.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.989 "is_configured": true, 00:24:32.989 "data_offset": 2048, 00:24:32.989 "data_size": 63488 00:24:32.989 } 00:24:32.989 ] 00:24:32.989 }' 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.989 13:35:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:33.247 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:33.506 [2024-10-28 13:35:47.412506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.506 "name": "raid_bdev1", 00:24:33.506 "aliases": [ 00:24:33.506 "442ae0a4-25b0-406d-ae97-b9519298e4e5" 00:24:33.506 ], 00:24:33.506 "product_name": "Raid Volume", 00:24:33.506 "block_size": 512, 00:24:33.506 "num_blocks": 126976, 00:24:33.506 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:33.506 "assigned_rate_limits": { 00:24:33.506 "rw_ios_per_sec": 0, 00:24:33.506 "rw_mbytes_per_sec": 0, 00:24:33.506 "r_mbytes_per_sec": 0, 00:24:33.506 "w_mbytes_per_sec": 0 00:24:33.506 }, 00:24:33.506 "claimed": false, 00:24:33.506 "zoned": false, 00:24:33.506 "supported_io_types": { 00:24:33.506 "read": true, 00:24:33.506 "write": true, 00:24:33.506 "unmap": true, 00:24:33.506 "flush": true, 00:24:33.506 "reset": true, 00:24:33.506 "nvme_admin": false, 00:24:33.506 "nvme_io": false, 00:24:33.506 "nvme_io_md": false, 00:24:33.506 "write_zeroes": true, 00:24:33.506 "zcopy": false, 00:24:33.506 "get_zone_info": false, 00:24:33.506 "zone_management": false, 00:24:33.506 "zone_append": false, 00:24:33.506 "compare": false, 00:24:33.506 "compare_and_write": false, 00:24:33.506 "abort": false, 00:24:33.506 "seek_hole": false, 00:24:33.506 "seek_data": false, 00:24:33.506 "copy": false, 00:24:33.506 "nvme_iov_md": false 00:24:33.506 }, 00:24:33.506 "memory_domains": [ 00:24:33.506 { 00:24:33.506 "dma_device_id": "system", 00:24:33.506 "dma_device_type": 1 00:24:33.506 }, 00:24:33.506 { 00:24:33.506 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.506 "dma_device_type": 2 00:24:33.506 }, 00:24:33.506 { 00:24:33.506 "dma_device_id": "system", 00:24:33.506 "dma_device_type": 1 00:24:33.506 }, 00:24:33.506 { 00:24:33.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.506 "dma_device_type": 2 00:24:33.506 } 00:24:33.506 ], 00:24:33.506 "driver_specific": { 00:24:33.506 "raid": { 00:24:33.506 "uuid": "442ae0a4-25b0-406d-ae97-b9519298e4e5", 00:24:33.506 "strip_size_kb": 64, 00:24:33.506 "state": "online", 00:24:33.506 "raid_level": "concat", 00:24:33.506 "superblock": true, 00:24:33.506 "num_base_bdevs": 2, 00:24:33.506 "num_base_bdevs_discovered": 2, 00:24:33.506 "num_base_bdevs_operational": 2, 00:24:33.506 "base_bdevs_list": [ 00:24:33.506 { 00:24:33.506 "name": "pt1", 00:24:33.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.506 "is_configured": true, 00:24:33.506 "data_offset": 2048, 00:24:33.506 "data_size": 63488 00:24:33.506 }, 00:24:33.506 { 00:24:33.506 "name": "pt2", 00:24:33.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.506 "is_configured": true, 00:24:33.506 "data_offset": 2048, 00:24:33.506 "data_size": 63488 00:24:33.506 } 00:24:33.506 ] 00:24:33.506 } 00:24:33.506 } 00:24:33.506 }' 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:33.506 pt2' 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.506 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.507 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.765 [2024-10-28 13:35:47.700656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 442ae0a4-25b0-406d-ae97-b9519298e4e5 '!=' 442ae0a4-25b0-406d-ae97-b9519298e4e5 ']' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75108 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75108 ']' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75108 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75108 00:24:33.765 killing process with pid 75108 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75108' 00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75108 00:24:33.765 [2024-10-28 13:35:47.781496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:24:33.765 13:35:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75108 00:24:33.765 [2024-10-28 13:35:47.781653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.765 [2024-10-28 13:35:47.781725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.765 [2024-10-28 13:35:47.781760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:33.765 [2024-10-28 13:35:47.810767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:34.023 13:35:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:34.023 00:24:34.023 real 0m4.054s 00:24:34.023 user 0m6.509s 00:24:34.023 sys 0m0.754s 00:24:34.023 13:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.023 ************************************ 00:24:34.023 END TEST raid_superblock_test 00:24:34.023 ************************************ 00:24:34.023 13:35:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.023 13:35:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:24:34.023 13:35:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:34.023 13:35:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:34.023 13:35:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:34.023 ************************************ 00:24:34.023 START TEST raid_read_error_test 00:24:34.023 ************************************ 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:34.023 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:34.024 13:35:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XnzwPHMfmQ 00:24:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75309 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75309 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75309 ']' 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:34.024 13:35:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.281 [2024-10-28 13:35:48.239362] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:34.281 [2024-10-28 13:35:48.239577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75309 ] 00:24:34.281 [2024-10-28 13:35:48.395167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:24:34.281 [2024-10-28 13:35:48.424356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.538 [2024-10-28 13:35:48.476346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.538 [2024-10-28 13:35:48.532569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.538 [2024-10-28 13:35:48.532622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.104 BaseBdev1_malloc 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.104 true 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:35.104 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.104 [2024-10-28 13:35:49.259920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:35.104 [2024-10-28 13:35:49.260272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.104 [2024-10-28 13:35:49.260316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:35.104 [2024-10-28 13:35:49.260341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.363 [2024-10-28 13:35:49.263437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.363 [2024-10-28 13:35:49.263679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.363 BaseBdev1 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.363 BaseBdev2_malloc 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.363 true 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.363 [2024-10-28 13:35:49.296228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:35.363 [2024-10-28 13:35:49.296318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.363 [2024-10-28 13:35:49.296348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:35.363 [2024-10-28 13:35:49.296367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.363 [2024-10-28 13:35:49.299371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.363 [2024-10-28 13:35:49.299427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:35.363 BaseBdev2 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:35.363 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.364 [2024-10-28 13:35:49.308373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:35.364 [2024-10-28 13:35:49.310981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.364 [2024-10-28 13:35:49.311254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:24:35.364 [2024-10-28 13:35:49.311279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:35.364 [2024-10-28 13:35:49.311664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:35.364 [2024-10-28 13:35:49.311868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:35.364 [2024-10-28 13:35:49.311894] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:35.364 [2024-10-28 13:35:49.312109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:35.364 "name": "raid_bdev1", 00:24:35.364 "uuid": "2e9ebf26-c767-40ce-bd68-437e0106c794", 00:24:35.364 "strip_size_kb": 64, 00:24:35.364 "state": "online", 00:24:35.364 "raid_level": "concat", 00:24:35.364 "superblock": true, 00:24:35.364 "num_base_bdevs": 2, 00:24:35.364 "num_base_bdevs_discovered": 2, 00:24:35.364 "num_base_bdevs_operational": 2, 00:24:35.364 "base_bdevs_list": [ 00:24:35.364 { 00:24:35.364 "name": "BaseBdev1", 00:24:35.364 "uuid": "9befaae6-33b8-5a15-8c5e-4e1e14de6b4f", 00:24:35.364 "is_configured": true, 00:24:35.364 "data_offset": 2048, 00:24:35.364 "data_size": 63488 00:24:35.364 }, 00:24:35.364 { 00:24:35.364 "name": "BaseBdev2", 00:24:35.364 "uuid": "371ff99b-7204-5197-8590-ee0a88113181", 00:24:35.364 "is_configured": true, 00:24:35.364 "data_offset": 2048, 00:24:35.364 "data_size": 63488 00:24:35.364 } 00:24:35.364 ] 00:24:35.364 }' 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:35.364 13:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.931 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:35.931 13:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:35.931 [2024-10-28 13:35:49.981627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:36.888 
13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:36.888 "name": "raid_bdev1", 00:24:36.888 "uuid": "2e9ebf26-c767-40ce-bd68-437e0106c794", 00:24:36.888 "strip_size_kb": 64, 00:24:36.888 "state": "online", 00:24:36.888 "raid_level": "concat", 00:24:36.888 "superblock": true, 00:24:36.888 "num_base_bdevs": 2, 00:24:36.888 "num_base_bdevs_discovered": 2, 00:24:36.888 "num_base_bdevs_operational": 2, 00:24:36.888 "base_bdevs_list": [ 00:24:36.888 { 00:24:36.888 "name": "BaseBdev1", 00:24:36.888 "uuid": "9befaae6-33b8-5a15-8c5e-4e1e14de6b4f", 00:24:36.888 "is_configured": true, 00:24:36.888 "data_offset": 2048, 00:24:36.888 "data_size": 63488 00:24:36.888 }, 00:24:36.888 { 00:24:36.888 "name": "BaseBdev2", 00:24:36.888 "uuid": "371ff99b-7204-5197-8590-ee0a88113181", 00:24:36.888 "is_configured": true, 00:24:36.888 "data_offset": 2048, 00:24:36.888 "data_size": 63488 00:24:36.888 } 00:24:36.888 ] 00:24:36.888 }' 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:36.888 13:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.456 [2024-10-28 13:35:51.439509] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:37.456 [2024-10-28 13:35:51.439578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:37.456 [2024-10-28 13:35:51.442829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.456 [2024-10-28 13:35:51.442915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.456 [2024-10-28 13:35:51.442961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:37.456 [2024-10-28 13:35:51.442980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:37.456 { 00:24:37.456 "results": [ 00:24:37.456 { 00:24:37.456 "job": "raid_bdev1", 00:24:37.456 "core_mask": "0x1", 00:24:37.456 "workload": "randrw", 00:24:37.456 "percentage": 50, 00:24:37.456 "status": "finished", 00:24:37.456 "queue_depth": 1, 00:24:37.456 "io_size": 131072, 00:24:37.456 "runtime": 1.455338, 00:24:37.456 "iops": 10978.892875744328, 00:24:37.456 "mibps": 1372.361609468041, 00:24:37.456 "io_failed": 1, 00:24:37.456 "io_timeout": 0, 00:24:37.456 "avg_latency_us": 126.9053924184583, 00:24:37.456 "min_latency_us": 42.35636363636364, 00:24:37.456 "max_latency_us": 1854.370909090909 00:24:37.456 } 00:24:37.456 ], 00:24:37.456 "core_count": 1 00:24:37.456 } 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75309 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75309 ']' 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75309 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75309 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.456 killing process with pid 75309 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75309' 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75309 00:24:37.456 [2024-10-28 13:35:51.478251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:37.456 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75309 00:24:37.456 [2024-10-28 13:35:51.497256] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XnzwPHMfmQ 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:24:37.715 00:24:37.715 real 0m3.640s 00:24:37.715 user 0m4.911s 00:24:37.715 sys 0m0.527s 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:24:37.715 13:35:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.715 ************************************ 00:24:37.715 END TEST raid_read_error_test 00:24:37.715 ************************************ 00:24:37.715 13:35:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:24:37.715 13:35:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:37.715 13:35:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:37.715 13:35:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.715 ************************************ 00:24:37.715 START TEST raid_write_error_test 00:24:37.715 ************************************ 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.soeIaQlF51 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75443 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75443 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75443 ']' 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.715 
13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.715 13:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.974 [2024-10-28 13:35:51.987812] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:37.974 [2024-10-28 13:35:51.988018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75443 ] 00:24:38.232 [2024-10-28 13:35:52.144035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:38.232 [2024-10-28 13:35:52.180322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.232 [2024-10-28 13:35:52.241232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.232 [2024-10-28 13:35:52.308707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.232 [2024-10-28 13:35:52.308769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.167 BaseBdev1_malloc 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.167 true 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:39.167 13:35:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.167 [2024-10-28 13:35:53.001891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:39.167 [2024-10-28 13:35:53.001979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.167 [2024-10-28 13:35:53.002012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:39.167 [2024-10-28 13:35:53.002036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.167 [2024-10-28 13:35:53.005034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.167 [2024-10-28 13:35:53.005083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:39.167 BaseBdev1 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.167 BaseBdev2_malloc 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.167 true 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.167 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.168 [2024-10-28 13:35:53.037685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:39.168 [2024-10-28 13:35:53.037766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:39.168 [2024-10-28 13:35:53.037799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:39.168 [2024-10-28 13:35:53.037817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:39.168 [2024-10-28 13:35:53.040735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:39.168 [2024-10-28 13:35:53.040786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:39.168 BaseBdev2 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.168 [2024-10-28 13:35:53.045719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:39.168 [2024-10-28 13:35:53.048335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:39.168 [2024-10-28 13:35:53.048567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:39.168 
[2024-10-28 13:35:53.048590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:39.168 [2024-10-28 13:35:53.048952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:39.168 [2024-10-28 13:35:53.049180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:39.168 [2024-10-28 13:35:53.049211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:39.168 [2024-10-28 13:35:53.049391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.168 13:35:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:39.168 "name": "raid_bdev1", 00:24:39.168 "uuid": "2f70ff86-4b31-4d13-8773-0a14fcd0e774", 00:24:39.168 "strip_size_kb": 64, 00:24:39.168 "state": "online", 00:24:39.168 "raid_level": "concat", 00:24:39.168 "superblock": true, 00:24:39.168 "num_base_bdevs": 2, 00:24:39.168 "num_base_bdevs_discovered": 2, 00:24:39.168 "num_base_bdevs_operational": 2, 00:24:39.168 "base_bdevs_list": [ 00:24:39.168 { 00:24:39.168 "name": "BaseBdev1", 00:24:39.168 "uuid": "6f663686-081e-5e07-a70c-d36cdaf875b9", 00:24:39.168 "is_configured": true, 00:24:39.168 "data_offset": 2048, 00:24:39.168 "data_size": 63488 00:24:39.168 }, 00:24:39.168 { 00:24:39.168 "name": "BaseBdev2", 00:24:39.168 "uuid": "b81dfd93-00da-5abb-9945-1b5dabb85c4d", 00:24:39.168 "is_configured": true, 00:24:39.168 "data_offset": 2048, 00:24:39.168 "data_size": 63488 00:24:39.168 } 00:24:39.168 ] 00:24:39.168 }' 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:39.168 13:35:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.731 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:39.731 13:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:39.731 [2024-10-28 13:35:53.734442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:40.663 13:35:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.663 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.663 "name": "raid_bdev1", 00:24:40.663 "uuid": "2f70ff86-4b31-4d13-8773-0a14fcd0e774", 00:24:40.663 "strip_size_kb": 64, 00:24:40.663 "state": "online", 00:24:40.663 "raid_level": "concat", 00:24:40.663 "superblock": true, 00:24:40.663 "num_base_bdevs": 2, 00:24:40.663 "num_base_bdevs_discovered": 2, 00:24:40.663 "num_base_bdevs_operational": 2, 00:24:40.663 "base_bdevs_list": [ 00:24:40.663 { 00:24:40.663 "name": "BaseBdev1", 00:24:40.663 "uuid": "6f663686-081e-5e07-a70c-d36cdaf875b9", 00:24:40.663 "is_configured": true, 00:24:40.663 "data_offset": 2048, 00:24:40.663 "data_size": 63488 00:24:40.664 }, 00:24:40.664 { 00:24:40.664 "name": "BaseBdev2", 00:24:40.664 "uuid": "b81dfd93-00da-5abb-9945-1b5dabb85c4d", 00:24:40.664 "is_configured": true, 00:24:40.664 "data_offset": 2048, 00:24:40.664 "data_size": 63488 00:24:40.664 } 00:24:40.664 ] 00:24:40.664 }' 00:24:40.664 13:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.664 13:35:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.230 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.231 [2024-10-28 13:35:55.108605] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:41.231 [2024-10-28 13:35:55.108660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.231 [2024-10-28 13:35:55.111885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.231 [2024-10-28 13:35:55.111958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:41.231 [2024-10-28 13:35:55.112006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.231 [2024-10-28 13:35:55.112024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:41.231 { 00:24:41.231 "results": [ 00:24:41.231 { 00:24:41.231 "job": "raid_bdev1", 00:24:41.231 "core_mask": "0x1", 00:24:41.231 "workload": "randrw", 00:24:41.231 "percentage": 50, 00:24:41.231 "status": "finished", 00:24:41.231 "queue_depth": 1, 00:24:41.231 "io_size": 131072, 00:24:41.231 "runtime": 1.371633, 00:24:41.231 "iops": 10973.780887453131, 00:24:41.231 "mibps": 1371.7226109316414, 00:24:41.231 "io_failed": 1, 00:24:41.231 "io_timeout": 0, 00:24:41.231 "avg_latency_us": 127.29459038669428, 00:24:41.231 "min_latency_us": 42.35636363636364, 00:24:41.231 "max_latency_us": 1884.16 00:24:41.231 } 00:24:41.231 ], 00:24:41.231 "core_count": 1 00:24:41.231 } 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75443 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75443 ']' 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75443 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75443 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.231 killing process with pid 75443 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75443' 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75443 00:24:41.231 [2024-10-28 13:35:55.144998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.231 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75443 00:24:41.231 [2024-10-28 13:35:55.164235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.soeIaQlF51 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:24:41.490 00:24:41.490 real 0m3.610s 00:24:41.490 user 0m4.810s 00:24:41.490 sys 0m0.590s 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.490 13:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.490 ************************************ 00:24:41.490 END TEST raid_write_error_test 00:24:41.490 ************************************ 00:24:41.490 13:35:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:41.490 13:35:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:24:41.490 13:35:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:41.490 13:35:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.490 13:35:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:41.490 ************************************ 00:24:41.490 START TEST raid_state_function_test 00:24:41.490 ************************************ 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75576 00:24:41.490 Process raid pid: 75576 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75576' 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75576 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:41.490 13:35:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75576 ']' 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.490 13:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:41.490 [2024-10-28 13:35:55.603974] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:41.490 [2024-10-28 13:35:55.604215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.749 [2024-10-28 13:35:55.767291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:41.749 [2024-10-28 13:35:55.798005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.749 [2024-10-28 13:35:55.855087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.007 [2024-10-28 13:35:55.912754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:42.007 [2024-10-28 13:35:55.912799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.572 [2024-10-28 13:35:56.683881] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:42.572 [2024-10-28 13:35:56.683967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:42.572 [2024-10-28 13:35:56.683988] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:42.572 [2024-10-28 13:35:56.684002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.572 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.830 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:42.830 "name": "Existed_Raid", 00:24:42.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.830 "strip_size_kb": 0, 00:24:42.830 "state": "configuring", 00:24:42.830 "raid_level": "raid1", 00:24:42.830 "superblock": false, 00:24:42.830 "num_base_bdevs": 2, 00:24:42.830 "num_base_bdevs_discovered": 0, 00:24:42.830 "num_base_bdevs_operational": 2, 00:24:42.830 "base_bdevs_list": [ 00:24:42.830 { 00:24:42.830 "name": "BaseBdev1", 00:24:42.830 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:42.830 "is_configured": false, 00:24:42.830 "data_offset": 0, 00:24:42.830 "data_size": 0 00:24:42.830 }, 00:24:42.830 { 00:24:42.830 "name": "BaseBdev2", 00:24:42.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.830 "is_configured": false, 00:24:42.830 "data_offset": 0, 00:24:42.830 "data_size": 0 00:24:42.830 } 00:24:42.830 ] 00:24:42.830 }' 00:24:42.830 13:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:42.830 13:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 [2024-10-28 13:35:57.251917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:43.396 [2024-10-28 13:35:57.251965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 [2024-10-28 13:35:57.259930] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:43.396 [2024-10-28 13:35:57.259981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:43.396 [2024-10-28 
13:35:57.260001] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:43.396 [2024-10-28 13:35:57.260015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 [2024-10-28 13:35:57.280500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.396 BaseBdev1 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 13:35:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.396 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.396 [ 00:24:43.396 { 00:24:43.396 "name": "BaseBdev1", 00:24:43.396 "aliases": [ 00:24:43.396 "2dbe74c4-cf77-40e2-b71f-383f312d905a" 00:24:43.396 ], 00:24:43.396 "product_name": "Malloc disk", 00:24:43.396 "block_size": 512, 00:24:43.396 "num_blocks": 65536, 00:24:43.396 "uuid": "2dbe74c4-cf77-40e2-b71f-383f312d905a", 00:24:43.396 "assigned_rate_limits": { 00:24:43.396 "rw_ios_per_sec": 0, 00:24:43.396 "rw_mbytes_per_sec": 0, 00:24:43.396 "r_mbytes_per_sec": 0, 00:24:43.396 "w_mbytes_per_sec": 0 00:24:43.396 }, 00:24:43.396 "claimed": true, 00:24:43.396 "claim_type": "exclusive_write", 00:24:43.396 "zoned": false, 00:24:43.396 "supported_io_types": { 00:24:43.396 "read": true, 00:24:43.396 "write": true, 00:24:43.396 "unmap": true, 00:24:43.396 "flush": true, 00:24:43.396 "reset": true, 00:24:43.396 "nvme_admin": false, 00:24:43.396 "nvme_io": false, 00:24:43.396 "nvme_io_md": false, 00:24:43.396 "write_zeroes": true, 00:24:43.396 "zcopy": true, 00:24:43.396 "get_zone_info": false, 00:24:43.396 "zone_management": false, 00:24:43.396 "zone_append": false, 00:24:43.396 "compare": false, 00:24:43.396 "compare_and_write": false, 00:24:43.396 "abort": true, 00:24:43.396 "seek_hole": false, 00:24:43.396 "seek_data": false, 00:24:43.396 "copy": true, 00:24:43.396 "nvme_iov_md": false 00:24:43.397 }, 00:24:43.397 "memory_domains": [ 00:24:43.397 { 00:24:43.397 "dma_device_id": "system", 00:24:43.397 "dma_device_type": 1 00:24:43.397 }, 00:24:43.397 { 00:24:43.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.397 "dma_device_type": 
2 00:24:43.397 } 00:24:43.397 ], 00:24:43.397 "driver_specific": {} 00:24:43.397 } 00:24:43.397 ] 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.397 "name": "Existed_Raid", 00:24:43.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.397 "strip_size_kb": 0, 00:24:43.397 "state": "configuring", 00:24:43.397 "raid_level": "raid1", 00:24:43.397 "superblock": false, 00:24:43.397 "num_base_bdevs": 2, 00:24:43.397 "num_base_bdevs_discovered": 1, 00:24:43.397 "num_base_bdevs_operational": 2, 00:24:43.397 "base_bdevs_list": [ 00:24:43.397 { 00:24:43.397 "name": "BaseBdev1", 00:24:43.397 "uuid": "2dbe74c4-cf77-40e2-b71f-383f312d905a", 00:24:43.397 "is_configured": true, 00:24:43.397 "data_offset": 0, 00:24:43.397 "data_size": 65536 00:24:43.397 }, 00:24:43.397 { 00:24:43.397 "name": "BaseBdev2", 00:24:43.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.397 "is_configured": false, 00:24:43.397 "data_offset": 0, 00:24:43.397 "data_size": 0 00:24:43.397 } 00:24:43.397 ] 00:24:43.397 }' 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.397 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.964 [2024-10-28 13:35:57.820891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:43.964 [2024-10-28 13:35:57.820980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.964 13:35:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.964 [2024-10-28 13:35:57.828900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.964 [2024-10-28 13:35:57.831525] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:43.964 [2024-10-28 13:35:57.831605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.964 "name": "Existed_Raid", 00:24:43.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.964 "strip_size_kb": 0, 00:24:43.964 "state": "configuring", 00:24:43.964 "raid_level": "raid1", 00:24:43.964 "superblock": false, 00:24:43.964 "num_base_bdevs": 2, 00:24:43.964 "num_base_bdevs_discovered": 1, 00:24:43.964 "num_base_bdevs_operational": 2, 00:24:43.964 "base_bdevs_list": [ 00:24:43.964 { 00:24:43.964 "name": "BaseBdev1", 00:24:43.964 "uuid": "2dbe74c4-cf77-40e2-b71f-383f312d905a", 00:24:43.964 "is_configured": true, 00:24:43.964 "data_offset": 0, 00:24:43.964 "data_size": 65536 00:24:43.964 }, 00:24:43.964 { 00:24:43.964 "name": "BaseBdev2", 00:24:43.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.964 "is_configured": false, 00:24:43.964 "data_offset": 0, 00:24:43.964 "data_size": 0 00:24:43.964 } 00:24:43.964 ] 00:24:43.964 }' 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.964 13:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 
13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 [2024-10-28 13:35:58.362708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.223 [2024-10-28 13:35:58.362778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:44.223 [2024-10-28 13:35:58.362795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:44.223 [2024-10-28 13:35:58.363153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:44.223 [2024-10-28 13:35:58.363353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:44.223 [2024-10-28 13:35:58.363370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:44.223 [2024-10-28 13:35:58.363663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.223 BaseBdev2 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.223 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 [ 00:24:44.481 { 00:24:44.481 "name": "BaseBdev2", 00:24:44.481 "aliases": [ 00:24:44.481 "99aa916b-d707-41f1-9dd0-206381c949af" 00:24:44.481 ], 00:24:44.481 "product_name": "Malloc disk", 00:24:44.481 "block_size": 512, 00:24:44.481 "num_blocks": 65536, 00:24:44.481 "uuid": "99aa916b-d707-41f1-9dd0-206381c949af", 00:24:44.481 "assigned_rate_limits": { 00:24:44.481 "rw_ios_per_sec": 0, 00:24:44.481 "rw_mbytes_per_sec": 0, 00:24:44.481 "r_mbytes_per_sec": 0, 00:24:44.481 "w_mbytes_per_sec": 0 00:24:44.481 }, 00:24:44.481 "claimed": true, 00:24:44.481 "claim_type": "exclusive_write", 00:24:44.481 "zoned": false, 00:24:44.481 "supported_io_types": { 00:24:44.481 "read": true, 00:24:44.481 "write": true, 00:24:44.481 "unmap": true, 00:24:44.481 "flush": true, 00:24:44.481 "reset": true, 00:24:44.481 "nvme_admin": false, 00:24:44.481 "nvme_io": false, 00:24:44.481 "nvme_io_md": false, 00:24:44.481 "write_zeroes": true, 00:24:44.481 "zcopy": true, 00:24:44.481 "get_zone_info": false, 00:24:44.481 "zone_management": false, 00:24:44.481 "zone_append": false, 00:24:44.481 "compare": false, 00:24:44.481 "compare_and_write": false, 
00:24:44.481 "abort": true, 00:24:44.481 "seek_hole": false, 00:24:44.481 "seek_data": false, 00:24:44.481 "copy": true, 00:24:44.481 "nvme_iov_md": false 00:24:44.481 }, 00:24:44.481 "memory_domains": [ 00:24:44.481 { 00:24:44.481 "dma_device_id": "system", 00:24:44.481 "dma_device_type": 1 00:24:44.481 }, 00:24:44.481 { 00:24:44.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.481 "dma_device_type": 2 00:24:44.481 } 00:24:44.481 ], 00:24:44.481 "driver_specific": {} 00:24:44.481 } 00:24:44.481 ] 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.481 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.482 
13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.482 "name": "Existed_Raid", 00:24:44.482 "uuid": "7b53500b-1ae7-4c43-adf9-5cf770af7d12", 00:24:44.482 "strip_size_kb": 0, 00:24:44.482 "state": "online", 00:24:44.482 "raid_level": "raid1", 00:24:44.482 "superblock": false, 00:24:44.482 "num_base_bdevs": 2, 00:24:44.482 "num_base_bdevs_discovered": 2, 00:24:44.482 "num_base_bdevs_operational": 2, 00:24:44.482 "base_bdevs_list": [ 00:24:44.482 { 00:24:44.482 "name": "BaseBdev1", 00:24:44.482 "uuid": "2dbe74c4-cf77-40e2-b71f-383f312d905a", 00:24:44.482 "is_configured": true, 00:24:44.482 "data_offset": 0, 00:24:44.482 "data_size": 65536 00:24:44.482 }, 00:24:44.482 { 00:24:44.482 "name": "BaseBdev2", 00:24:44.482 "uuid": "99aa916b-d707-41f1-9dd0-206381c949af", 00:24:44.482 "is_configured": true, 00:24:44.482 "data_offset": 0, 00:24:44.482 "data_size": 65536 00:24:44.482 } 00:24:44.482 ] 00:24:44.482 }' 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.482 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:45.049 13:35:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.049 [2024-10-28 13:35:58.939327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.049 13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:45.049 "name": "Existed_Raid", 00:24:45.049 "aliases": [ 00:24:45.049 "7b53500b-1ae7-4c43-adf9-5cf770af7d12" 00:24:45.049 ], 00:24:45.049 "product_name": "Raid Volume", 00:24:45.049 "block_size": 512, 00:24:45.049 "num_blocks": 65536, 00:24:45.049 "uuid": "7b53500b-1ae7-4c43-adf9-5cf770af7d12", 00:24:45.049 "assigned_rate_limits": { 00:24:45.049 "rw_ios_per_sec": 0, 00:24:45.049 "rw_mbytes_per_sec": 0, 00:24:45.049 "r_mbytes_per_sec": 0, 00:24:45.049 "w_mbytes_per_sec": 0 00:24:45.049 }, 00:24:45.049 "claimed": false, 00:24:45.049 "zoned": false, 00:24:45.049 "supported_io_types": { 00:24:45.049 "read": true, 00:24:45.049 "write": true, 00:24:45.049 "unmap": false, 00:24:45.049 
"flush": false, 00:24:45.049 "reset": true, 00:24:45.049 "nvme_admin": false, 00:24:45.049 "nvme_io": false, 00:24:45.049 "nvme_io_md": false, 00:24:45.049 "write_zeroes": true, 00:24:45.049 "zcopy": false, 00:24:45.049 "get_zone_info": false, 00:24:45.049 "zone_management": false, 00:24:45.049 "zone_append": false, 00:24:45.049 "compare": false, 00:24:45.049 "compare_and_write": false, 00:24:45.049 "abort": false, 00:24:45.049 "seek_hole": false, 00:24:45.049 "seek_data": false, 00:24:45.049 "copy": false, 00:24:45.049 "nvme_iov_md": false 00:24:45.049 }, 00:24:45.049 "memory_domains": [ 00:24:45.049 { 00:24:45.049 "dma_device_id": "system", 00:24:45.049 "dma_device_type": 1 00:24:45.049 }, 00:24:45.049 { 00:24:45.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.049 "dma_device_type": 2 00:24:45.049 }, 00:24:45.049 { 00:24:45.049 "dma_device_id": "system", 00:24:45.049 "dma_device_type": 1 00:24:45.049 }, 00:24:45.049 { 00:24:45.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.049 "dma_device_type": 2 00:24:45.049 } 00:24:45.049 ], 00:24:45.049 "driver_specific": { 00:24:45.049 "raid": { 00:24:45.049 "uuid": "7b53500b-1ae7-4c43-adf9-5cf770af7d12", 00:24:45.049 "strip_size_kb": 0, 00:24:45.049 "state": "online", 00:24:45.049 "raid_level": "raid1", 00:24:45.049 "superblock": false, 00:24:45.049 "num_base_bdevs": 2, 00:24:45.049 "num_base_bdevs_discovered": 2, 00:24:45.049 "num_base_bdevs_operational": 2, 00:24:45.049 "base_bdevs_list": [ 00:24:45.049 { 00:24:45.049 "name": "BaseBdev1", 00:24:45.049 "uuid": "2dbe74c4-cf77-40e2-b71f-383f312d905a", 00:24:45.049 "is_configured": true, 00:24:45.049 "data_offset": 0, 00:24:45.049 "data_size": 65536 00:24:45.049 }, 00:24:45.049 { 00:24:45.049 "name": "BaseBdev2", 00:24:45.049 "uuid": "99aa916b-d707-41f1-9dd0-206381c949af", 00:24:45.049 "is_configured": true, 00:24:45.049 "data_offset": 0, 00:24:45.049 "data_size": 65536 00:24:45.049 } 00:24:45.049 ] 00:24:45.049 } 00:24:45.049 } 00:24:45.049 }' 00:24:45.049 
13:35:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:45.049 BaseBdev2' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:45.049 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.050 13:35:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.050 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.050 [2024-10-28 13:35:59.203089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.308 "name": "Existed_Raid", 00:24:45.308 "uuid": "7b53500b-1ae7-4c43-adf9-5cf770af7d12", 00:24:45.308 "strip_size_kb": 0, 00:24:45.308 "state": "online", 00:24:45.308 "raid_level": "raid1", 00:24:45.308 "superblock": false, 00:24:45.308 "num_base_bdevs": 2, 00:24:45.308 "num_base_bdevs_discovered": 1, 00:24:45.308 "num_base_bdevs_operational": 1, 00:24:45.308 "base_bdevs_list": [ 00:24:45.308 { 00:24:45.308 "name": null, 00:24:45.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.308 "is_configured": false, 00:24:45.308 "data_offset": 0, 00:24:45.308 "data_size": 65536 00:24:45.308 }, 00:24:45.308 { 00:24:45.308 
"name": "BaseBdev2", 00:24:45.308 "uuid": "99aa916b-d707-41f1-9dd0-206381c949af", 00:24:45.308 "is_configured": true, 00:24:45.308 "data_offset": 0, 00:24:45.308 "data_size": 65536 00:24:45.308 } 00:24:45.308 ] 00:24:45.308 }' 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.308 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.876 [2024-10-28 13:35:59.807994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:45.876 [2024-10-28 13:35:59.808177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:24:45.876 [2024-10-28 13:35:59.822737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.876 [2024-10-28 13:35:59.822828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.876 [2024-10-28 13:35:59.822854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:45.876 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75576 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75576 ']' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75576 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75576 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.877 killing process with pid 75576 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75576' 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75576 00:24:45.877 [2024-10-28 13:35:59.917632] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:45.877 13:35:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75576 00:24:45.877 [2024-10-28 13:35:59.919088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:46.135 13:36:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:46.135 00:24:46.135 real 0m4.687s 00:24:46.135 user 0m7.671s 00:24:46.135 sys 0m0.783s 00:24:46.135 13:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:46.135 13:36:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.135 ************************************ 00:24:46.135 END TEST raid_state_function_test 00:24:46.135 ************************************ 00:24:46.135 13:36:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:24:46.135 13:36:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:46.135 13:36:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:46.135 
13:36:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:46.136 ************************************ 00:24:46.136 START TEST raid_state_function_test_sb 00:24:46.136 ************************************ 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75828 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75828' 00:24:46.136 Process raid pid: 75828 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75828 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75828 ']' 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.136 13:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.394 [2024-10-28 13:36:00.341805] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:46.394 [2024-10-28 13:36:00.342038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.394 [2024-10-28 13:36:00.497365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:46.394 [2024-10-28 13:36:00.530893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.652 [2024-10-28 13:36:00.589416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.652 [2024-10-28 13:36:00.651973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:46.652 [2024-10-28 13:36:00.652030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.585 [2024-10-28 13:36:01.392022] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:47.585 [2024-10-28 13:36:01.392088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:47.585 [2024-10-28 13:36:01.392109] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:47.585 [2024-10-28 13:36:01.392123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:47.585 13:36:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.585 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:47.586 "name": "Existed_Raid", 00:24:47.586 "uuid": "af1081e3-cdfc-445f-b7af-6cd2cfad21b9", 00:24:47.586 "strip_size_kb": 0, 00:24:47.586 "state": "configuring", 00:24:47.586 "raid_level": "raid1", 00:24:47.586 "superblock": true, 00:24:47.586 "num_base_bdevs": 2, 00:24:47.586 "num_base_bdevs_discovered": 0, 00:24:47.586 "num_base_bdevs_operational": 2, 00:24:47.586 "base_bdevs_list": [ 00:24:47.586 { 
00:24:47.586 "name": "BaseBdev1", 00:24:47.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.586 "is_configured": false, 00:24:47.586 "data_offset": 0, 00:24:47.586 "data_size": 0 00:24:47.586 }, 00:24:47.586 { 00:24:47.586 "name": "BaseBdev2", 00:24:47.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.586 "is_configured": false, 00:24:47.586 "data_offset": 0, 00:24:47.586 "data_size": 0 00:24:47.586 } 00:24:47.586 ] 00:24:47.586 }' 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:47.586 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 [2024-10-28 13:36:01.916046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:47.845 [2024-10-28 13:36:01.916089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 [2024-10-28 13:36:01.924057] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:47.845 [2024-10-28 13:36:01.924106] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:47.845 [2024-10-28 13:36:01.924125] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:47.845 [2024-10-28 13:36:01.924153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 [2024-10-28 13:36:01.944186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:47.845 BaseBdev1 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 
13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.845 [ 00:24:47.845 { 00:24:47.845 "name": "BaseBdev1", 00:24:47.845 "aliases": [ 00:24:47.845 "786faaf1-2029-4c4b-9064-9b51852f3ab2" 00:24:47.845 ], 00:24:47.845 "product_name": "Malloc disk", 00:24:47.845 "block_size": 512, 00:24:47.845 "num_blocks": 65536, 00:24:47.845 "uuid": "786faaf1-2029-4c4b-9064-9b51852f3ab2", 00:24:47.845 "assigned_rate_limits": { 00:24:47.845 "rw_ios_per_sec": 0, 00:24:47.845 "rw_mbytes_per_sec": 0, 00:24:47.845 "r_mbytes_per_sec": 0, 00:24:47.845 "w_mbytes_per_sec": 0 00:24:47.845 }, 00:24:47.845 "claimed": true, 00:24:47.845 "claim_type": "exclusive_write", 00:24:47.845 "zoned": false, 00:24:47.845 "supported_io_types": { 00:24:47.845 "read": true, 00:24:47.845 "write": true, 00:24:47.845 "unmap": true, 00:24:47.845 "flush": true, 00:24:47.845 "reset": true, 00:24:47.845 "nvme_admin": false, 00:24:47.845 "nvme_io": false, 00:24:47.845 "nvme_io_md": false, 00:24:47.845 "write_zeroes": true, 00:24:47.845 "zcopy": true, 00:24:47.845 "get_zone_info": false, 00:24:47.845 "zone_management": false, 00:24:47.845 "zone_append": false, 00:24:47.845 "compare": false, 00:24:47.845 "compare_and_write": false, 00:24:47.845 "abort": true, 00:24:47.845 "seek_hole": false, 00:24:47.845 "seek_data": false, 00:24:47.845 "copy": true, 00:24:47.845 "nvme_iov_md": false 00:24:47.845 }, 00:24:47.845 "memory_domains": [ 00:24:47.845 { 00:24:47.845 "dma_device_id": "system", 00:24:47.845 
"dma_device_type": 1 00:24:47.845 }, 00:24:47.845 { 00:24:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.845 "dma_device_type": 2 00:24:47.845 } 00:24:47.845 ], 00:24:47.845 "driver_specific": {} 00:24:47.845 } 00:24:47.845 ] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:47.845 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.846 13:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.104 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.104 "name": "Existed_Raid", 00:24:48.104 "uuid": "49b0fd29-de0f-489e-8ed6-670f7ab9c023", 00:24:48.104 "strip_size_kb": 0, 00:24:48.104 "state": "configuring", 00:24:48.104 "raid_level": "raid1", 00:24:48.104 "superblock": true, 00:24:48.104 "num_base_bdevs": 2, 00:24:48.104 "num_base_bdevs_discovered": 1, 00:24:48.104 "num_base_bdevs_operational": 2, 00:24:48.104 "base_bdevs_list": [ 00:24:48.104 { 00:24:48.104 "name": "BaseBdev1", 00:24:48.104 "uuid": "786faaf1-2029-4c4b-9064-9b51852f3ab2", 00:24:48.104 "is_configured": true, 00:24:48.104 "data_offset": 2048, 00:24:48.104 "data_size": 63488 00:24:48.104 }, 00:24:48.104 { 00:24:48.104 "name": "BaseBdev2", 00:24:48.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.104 "is_configured": false, 00:24:48.104 "data_offset": 0, 00:24:48.104 "data_size": 0 00:24:48.104 } 00:24:48.104 ] 00:24:48.104 }' 00:24:48.104 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.104 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.363 [2024-10-28 13:36:02.492440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:48.363 [2024-10-28 13:36:02.492539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.363 [2024-10-28 13:36:02.500477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:48.363 [2024-10-28 13:36:02.503029] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:48.363 [2024-10-28 13:36:02.503088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.363 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.364 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.364 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.622 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.622 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.622 "name": "Existed_Raid", 00:24:48.622 "uuid": "9e814716-14c3-446d-b58e-470712257695", 00:24:48.622 "strip_size_kb": 0, 00:24:48.622 "state": "configuring", 00:24:48.622 "raid_level": "raid1", 00:24:48.622 "superblock": true, 00:24:48.622 "num_base_bdevs": 2, 00:24:48.622 "num_base_bdevs_discovered": 1, 00:24:48.622 "num_base_bdevs_operational": 2, 00:24:48.622 "base_bdevs_list": [ 00:24:48.622 { 00:24:48.622 "name": "BaseBdev1", 00:24:48.622 "uuid": "786faaf1-2029-4c4b-9064-9b51852f3ab2", 00:24:48.622 "is_configured": true, 00:24:48.622 "data_offset": 2048, 00:24:48.622 "data_size": 63488 00:24:48.622 }, 00:24:48.622 { 00:24:48.622 "name": "BaseBdev2", 00:24:48.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.622 "is_configured": false, 00:24:48.622 "data_offset": 0, 00:24:48.622 "data_size": 0 
00:24:48.622 } 00:24:48.622 ] 00:24:48.622 }' 00:24:48.622 13:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.622 13:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.880 [2024-10-28 13:36:03.033990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:48.880 [2024-10-28 13:36:03.034285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:48.880 [2024-10-28 13:36:03.034319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:48.880 [2024-10-28 13:36:03.034674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:48.880 BaseBdev2 00:24:48.880 [2024-10-28 13:36:03.034908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:48.880 [2024-10-28 13:36:03.034932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:24:48.880 [2024-10-28 13:36:03.035091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.880 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.140 [ 00:24:49.140 { 00:24:49.140 "name": "BaseBdev2", 00:24:49.140 "aliases": [ 00:24:49.140 "c20c42e2-06b0-49f8-b2e7-8f0830e2f230" 00:24:49.140 ], 00:24:49.140 "product_name": "Malloc disk", 00:24:49.140 "block_size": 512, 00:24:49.140 "num_blocks": 65536, 00:24:49.140 "uuid": "c20c42e2-06b0-49f8-b2e7-8f0830e2f230", 00:24:49.140 "assigned_rate_limits": { 00:24:49.140 "rw_ios_per_sec": 0, 00:24:49.140 "rw_mbytes_per_sec": 0, 00:24:49.140 "r_mbytes_per_sec": 0, 00:24:49.140 "w_mbytes_per_sec": 0 00:24:49.140 }, 00:24:49.140 "claimed": true, 00:24:49.140 "claim_type": "exclusive_write", 00:24:49.140 "zoned": false, 00:24:49.140 "supported_io_types": { 00:24:49.140 "read": true, 00:24:49.140 "write": true, 00:24:49.140 "unmap": true, 00:24:49.140 "flush": true, 00:24:49.140 "reset": true, 00:24:49.140 "nvme_admin": false, 
00:24:49.140 "nvme_io": false, 00:24:49.140 "nvme_io_md": false, 00:24:49.140 "write_zeroes": true, 00:24:49.140 "zcopy": true, 00:24:49.140 "get_zone_info": false, 00:24:49.140 "zone_management": false, 00:24:49.140 "zone_append": false, 00:24:49.140 "compare": false, 00:24:49.140 "compare_and_write": false, 00:24:49.140 "abort": true, 00:24:49.140 "seek_hole": false, 00:24:49.140 "seek_data": false, 00:24:49.140 "copy": true, 00:24:49.140 "nvme_iov_md": false 00:24:49.140 }, 00:24:49.140 "memory_domains": [ 00:24:49.140 { 00:24:49.140 "dma_device_id": "system", 00:24:49.140 "dma_device_type": 1 00:24:49.140 }, 00:24:49.140 { 00:24:49.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.140 "dma_device_type": 2 00:24:49.140 } 00:24:49.140 ], 00:24:49.140 "driver_specific": {} 00:24:49.140 } 00:24:49.140 ] 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:49.140 
13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.140 "name": "Existed_Raid", 00:24:49.140 "uuid": "9e814716-14c3-446d-b58e-470712257695", 00:24:49.140 "strip_size_kb": 0, 00:24:49.140 "state": "online", 00:24:49.140 "raid_level": "raid1", 00:24:49.140 "superblock": true, 00:24:49.140 "num_base_bdevs": 2, 00:24:49.140 "num_base_bdevs_discovered": 2, 00:24:49.140 "num_base_bdevs_operational": 2, 00:24:49.140 "base_bdevs_list": [ 00:24:49.140 { 00:24:49.140 "name": "BaseBdev1", 00:24:49.140 "uuid": "786faaf1-2029-4c4b-9064-9b51852f3ab2", 00:24:49.140 "is_configured": true, 00:24:49.140 "data_offset": 2048, 00:24:49.140 "data_size": 63488 00:24:49.140 }, 00:24:49.140 { 00:24:49.140 "name": "BaseBdev2", 00:24:49.140 "uuid": "c20c42e2-06b0-49f8-b2e7-8f0830e2f230", 00:24:49.140 "is_configured": true, 00:24:49.140 "data_offset": 2048, 00:24:49.140 "data_size": 63488 00:24:49.140 } 00:24:49.140 ] 00:24:49.140 
}' 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.140 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.708 [2024-10-28 13:36:03.614595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:49.708 "name": "Existed_Raid", 00:24:49.708 "aliases": [ 00:24:49.708 "9e814716-14c3-446d-b58e-470712257695" 00:24:49.708 ], 00:24:49.708 "product_name": "Raid Volume", 00:24:49.708 "block_size": 512, 00:24:49.708 "num_blocks": 63488, 00:24:49.708 "uuid": 
"9e814716-14c3-446d-b58e-470712257695", 00:24:49.708 "assigned_rate_limits": { 00:24:49.708 "rw_ios_per_sec": 0, 00:24:49.708 "rw_mbytes_per_sec": 0, 00:24:49.708 "r_mbytes_per_sec": 0, 00:24:49.708 "w_mbytes_per_sec": 0 00:24:49.708 }, 00:24:49.708 "claimed": false, 00:24:49.708 "zoned": false, 00:24:49.708 "supported_io_types": { 00:24:49.708 "read": true, 00:24:49.708 "write": true, 00:24:49.708 "unmap": false, 00:24:49.708 "flush": false, 00:24:49.708 "reset": true, 00:24:49.708 "nvme_admin": false, 00:24:49.708 "nvme_io": false, 00:24:49.708 "nvme_io_md": false, 00:24:49.708 "write_zeroes": true, 00:24:49.708 "zcopy": false, 00:24:49.708 "get_zone_info": false, 00:24:49.708 "zone_management": false, 00:24:49.708 "zone_append": false, 00:24:49.708 "compare": false, 00:24:49.708 "compare_and_write": false, 00:24:49.708 "abort": false, 00:24:49.708 "seek_hole": false, 00:24:49.708 "seek_data": false, 00:24:49.708 "copy": false, 00:24:49.708 "nvme_iov_md": false 00:24:49.708 }, 00:24:49.708 "memory_domains": [ 00:24:49.708 { 00:24:49.708 "dma_device_id": "system", 00:24:49.708 "dma_device_type": 1 00:24:49.708 }, 00:24:49.708 { 00:24:49.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.708 "dma_device_type": 2 00:24:49.708 }, 00:24:49.708 { 00:24:49.708 "dma_device_id": "system", 00:24:49.708 "dma_device_type": 1 00:24:49.708 }, 00:24:49.708 { 00:24:49.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.708 "dma_device_type": 2 00:24:49.708 } 00:24:49.708 ], 00:24:49.708 "driver_specific": { 00:24:49.708 "raid": { 00:24:49.708 "uuid": "9e814716-14c3-446d-b58e-470712257695", 00:24:49.708 "strip_size_kb": 0, 00:24:49.708 "state": "online", 00:24:49.708 "raid_level": "raid1", 00:24:49.708 "superblock": true, 00:24:49.708 "num_base_bdevs": 2, 00:24:49.708 "num_base_bdevs_discovered": 2, 00:24:49.708 "num_base_bdevs_operational": 2, 00:24:49.708 "base_bdevs_list": [ 00:24:49.708 { 00:24:49.708 "name": "BaseBdev1", 00:24:49.708 "uuid": 
"786faaf1-2029-4c4b-9064-9b51852f3ab2", 00:24:49.708 "is_configured": true, 00:24:49.708 "data_offset": 2048, 00:24:49.708 "data_size": 63488 00:24:49.708 }, 00:24:49.708 { 00:24:49.708 "name": "BaseBdev2", 00:24:49.708 "uuid": "c20c42e2-06b0-49f8-b2e7-8f0830e2f230", 00:24:49.708 "is_configured": true, 00:24:49.708 "data_offset": 2048, 00:24:49.708 "data_size": 63488 00:24:49.708 } 00:24:49.708 ] 00:24:49.708 } 00:24:49.708 } 00:24:49.708 }' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:49.708 BaseBdev2' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ 
]] 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:49.708 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.968 [2024-10-28 13:36:03.870401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:24:49.968 13:36:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.968 "name": "Existed_Raid", 
00:24:49.968 "uuid": "9e814716-14c3-446d-b58e-470712257695", 00:24:49.968 "strip_size_kb": 0, 00:24:49.968 "state": "online", 00:24:49.968 "raid_level": "raid1", 00:24:49.968 "superblock": true, 00:24:49.968 "num_base_bdevs": 2, 00:24:49.968 "num_base_bdevs_discovered": 1, 00:24:49.968 "num_base_bdevs_operational": 1, 00:24:49.968 "base_bdevs_list": [ 00:24:49.968 { 00:24:49.968 "name": null, 00:24:49.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.968 "is_configured": false, 00:24:49.968 "data_offset": 0, 00:24:49.968 "data_size": 63488 00:24:49.968 }, 00:24:49.968 { 00:24:49.968 "name": "BaseBdev2", 00:24:49.968 "uuid": "c20c42e2-06b0-49f8-b2e7-8f0830e2f230", 00:24:49.968 "is_configured": true, 00:24:49.968 "data_offset": 2048, 00:24:49.968 "data_size": 63488 00:24:49.968 } 00:24:49.968 ] 00:24:49.968 }' 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.968 13:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.536 [2024-10-28 13:36:04.464731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:50.536 [2024-10-28 13:36:04.464911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.536 [2024-10-28 13:36:04.478676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.536 [2024-10-28 13:36:04.478749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.536 [2024-10-28 13:36:04.478766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.536 13:36:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75828 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75828 ']' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75828 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75828 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.536 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.537 killing process with pid 75828 00:24:50.537 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75828' 00:24:50.537 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75828 00:24:50.537 [2024-10-28 13:36:04.578500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.537 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75828 00:24:50.537 [2024-10-28 13:36:04.579831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:50.811 13:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:50.811 00:24:50.811 real 0m4.592s 00:24:50.811 user 0m7.503s 
00:24:50.811 sys 0m0.812s 00:24:50.811 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:50.811 13:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.811 ************************************ 00:24:50.811 END TEST raid_state_function_test_sb 00:24:50.811 ************************************ 00:24:50.811 13:36:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:24:50.811 13:36:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:50.811 13:36:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:50.811 13:36:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:50.811 ************************************ 00:24:50.811 START TEST raid_superblock_test 00:24:50.811 ************************************ 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:50.811 
13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76070 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76070 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76070 ']' 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.811 13:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.102 [2024-10-28 13:36:04.990534] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:24:51.102 [2024-10-28 13:36:04.990747] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76070 ] 00:24:51.102 [2024-10-28 13:36:05.143805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:51.102 [2024-10-28 13:36:05.180840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.102 [2024-10-28 13:36:05.240333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.360 [2024-10-28 13:36:05.305212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:51.360 [2024-10-28 13:36:05.305266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:51.927 13:36:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:51.927  malloc1
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.927  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.186  [2024-10-28 13:36:06.084920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:52.186  [2024-10-28 13:36:06.085015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:52.186  [2024-10-28 13:36:06.085051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:24:52.186  [2024-10-28 13:36:06.085070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:52.186  [2024-10-28 13:36:06.088016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:52.186  [2024-10-28 13:36:06.088062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:24:52.186  pt1
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.186  malloc2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.186  [2024-10-28 13:36:06.120679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:52.186  [2024-10-28 13:36:06.120769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:52.186  [2024-10-28 13:36:06.120800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:24:52.186  [2024-10-28 13:36:06.120816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:52.186  [2024-10-28 13:36:06.123637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:52.186  [2024-10-28 13:36:06.123682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:52.186  pt2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.186  [2024-10-28 13:36:06.132717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:24:52.186  [2024-10-28 13:36:06.135198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:52.186  [2024-10-28 13:36:06.135402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:24:52.186  [2024-10-28 13:36:06.135430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:24:52.186  [2024-10-28 13:36:06.135793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:24:52.186  [2024-10-28 13:36:06.135996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:24:52.186  [2024-10-28 13:36:06.136036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:24:52.186  [2024-10-28 13:36:06.136237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:52.186  "name": "raid_bdev1",
00:24:52.186  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:52.186  "strip_size_kb": 0,
00:24:52.186  "state": "online",
00:24:52.186  "raid_level": "raid1",
00:24:52.186  "superblock": true,
00:24:52.186  "num_base_bdevs": 2,
00:24:52.186  "num_base_bdevs_discovered": 2,
00:24:52.186  "num_base_bdevs_operational": 2,
00:24:52.186  "base_bdevs_list": [
00:24:52.186  {
00:24:52.186  "name": "pt1",
00:24:52.186  "uuid": "00000000-0000-0000-0000-000000000001",
00:24:52.186  "is_configured": true,
00:24:52.186  "data_offset": 2048,
00:24:52.186  "data_size": 63488
00:24:52.186  },
00:24:52.186  {
00:24:52.186  "name": "pt2",
00:24:52.186  "uuid": "00000000-0000-0000-0000-000000000002",
00:24:52.186  "is_configured": true,
00:24:52.186  "data_offset": 2048,
00:24:52.186  "data_size": 63488
00:24:52.186  }
00:24:52.186  ]
00:24:52.186  }'
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:52.186  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.753  [2024-10-28 13:36:06.673310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:52.753  "name": "raid_bdev1",
00:24:52.753  "aliases": [
00:24:52.753  "8576ab63-cbe1-41c9-b76e-e31f22ba246b"
00:24:52.753  ],
00:24:52.753  "product_name": "Raid Volume",
00:24:52.753  "block_size": 512,
00:24:52.753  "num_blocks": 63488,
00:24:52.753  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:52.753  "assigned_rate_limits": {
00:24:52.753  "rw_ios_per_sec": 0,
00:24:52.753  "rw_mbytes_per_sec": 0,
00:24:52.753  "r_mbytes_per_sec": 0,
00:24:52.753  "w_mbytes_per_sec": 0
00:24:52.753  },
00:24:52.753  "claimed": false,
00:24:52.753  "zoned": false,
00:24:52.753  "supported_io_types": {
00:24:52.753  "read": true,
00:24:52.753  "write": true,
00:24:52.753  "unmap": false,
00:24:52.753  "flush": false,
00:24:52.753  "reset": true,
00:24:52.753  "nvme_admin": false,
00:24:52.753  "nvme_io": false,
00:24:52.753  "nvme_io_md": false,
00:24:52.753  "write_zeroes": true,
00:24:52.753  "zcopy": false,
00:24:52.753  "get_zone_info": false,
00:24:52.753  "zone_management": false,
00:24:52.753  "zone_append": false,
00:24:52.753  "compare": false,
00:24:52.753  "compare_and_write": false,
00:24:52.753  "abort": false,
00:24:52.753  "seek_hole": false,
00:24:52.753  "seek_data": false,
00:24:52.753  "copy": false,
00:24:52.753  "nvme_iov_md": false
00:24:52.753  },
00:24:52.753  "memory_domains": [
00:24:52.753  {
00:24:52.753  "dma_device_id": "system",
00:24:52.753  "dma_device_type": 1
00:24:52.753  },
00:24:52.753  {
00:24:52.753  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:52.753  "dma_device_type": 2
00:24:52.753  },
00:24:52.753  {
00:24:52.753  "dma_device_id": "system",
00:24:52.753  "dma_device_type": 1
00:24:52.753  },
00:24:52.753  {
00:24:52.753  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:52.753  "dma_device_type": 2
00:24:52.753  }
00:24:52.753  ],
00:24:52.753  "driver_specific": {
00:24:52.753  "raid": {
00:24:52.753  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:52.753  "strip_size_kb": 0,
00:24:52.753  "state": "online",
00:24:52.753  "raid_level": "raid1",
00:24:52.753  "superblock": true,
00:24:52.753  "num_base_bdevs": 2,
00:24:52.753  "num_base_bdevs_discovered": 2,
00:24:52.753  "num_base_bdevs_operational": 2,
00:24:52.753  "base_bdevs_list": [
00:24:52.753  {
00:24:52.753  "name": "pt1",
00:24:52.753  "uuid": "00000000-0000-0000-0000-000000000001",
00:24:52.753  "is_configured": true,
00:24:52.753  "data_offset": 2048,
00:24:52.753  "data_size": 63488
00:24:52.753  },
00:24:52.753  {
00:24:52.753  "name": "pt2",
00:24:52.753  "uuid": "00000000-0000-0000-0000-000000000002",
00:24:52.753  "is_configured": true,
00:24:52.753  "data_offset": 2048,
00:24:52.753  "data_size": 63488
00:24:52.753  }
00:24:52.753  ]
00:24:52.753  }
00:24:52.753  }
00:24:52.753  }'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:24:52.753  pt2'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:52.753  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:24:52.754  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.754  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.011  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:24:53.012  [2024-10-28 13:36:06.957257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:53.012  13:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8576ab63-cbe1-41c9-b76e-e31f22ba246b
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8576ab63-cbe1-41c9-b76e-e31f22ba246b ']'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  [2024-10-28 13:36:07.012943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:53.012  [2024-10-28 13:36:07.013165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:53.012  [2024-10-28 13:36:07.013394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:53.012  [2024-10-28 13:36:07.013629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:53.012  [2024-10-28 13:36:07.013774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.012  [2024-10-28 13:36:07.157027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:24:53.012  [2024-10-28 13:36:07.159710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:24:53.012  [2024-10-28 13:36:07.159815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:24:53.012  [2024-10-28 13:36:07.159895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:24:53.012  [2024-10-28 13:36:07.159938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:53.012  [2024-10-28 13:36:07.159964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring
00:24:53.012  request:
00:24:53.012  {
00:24:53.012  "name": "raid_bdev1",
00:24:53.012  "raid_level": "raid1",
00:24:53.012  "base_bdevs": [
00:24:53.012  "malloc1",
00:24:53.012  "malloc2"
00:24:53.012  ],
00:24:53.012  "superblock": false,
00:24:53.012  "method": "bdev_raid_create",
00:24:53.012  "req_id": 1
00:24:53.012  }
00:24:53.012  Got JSON-RPC error response
00:24:53.012  response:
00:24:53.012  {
00:24:53.012  "code": -17,
00:24:53.012  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:24:53.012  }
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:24:53.012  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.270  [2024-10-28 13:36:07.225009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:24:53.270  [2024-10-28 13:36:07.225331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:53.270  [2024-10-28 13:36:07.225413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:24:53.270  [2024-10-28 13:36:07.225597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:53.270  [2024-10-28 13:36:07.228642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:53.270  [2024-10-28 13:36:07.228825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:24:53.270  [2024-10-28 13:36:07.229055] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:24:53.270  [2024-10-28 13:36:07.229252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:24:53.270  pt1
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:53.270  "name": "raid_bdev1",
00:24:53.270  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:53.270  "strip_size_kb": 0,
00:24:53.270  "state": "configuring",
00:24:53.270  "raid_level": "raid1",
00:24:53.270  "superblock": true,
00:24:53.270  "num_base_bdevs": 2,
00:24:53.270  "num_base_bdevs_discovered": 1,
00:24:53.270  "num_base_bdevs_operational": 2,
00:24:53.270  "base_bdevs_list": [
00:24:53.270  {
00:24:53.270  "name": "pt1",
00:24:53.270  "uuid": "00000000-0000-0000-0000-000000000001",
00:24:53.270  "is_configured": true,
00:24:53.270  "data_offset": 2048,
00:24:53.270  "data_size": 63488
00:24:53.270  },
00:24:53.270  {
00:24:53.270  "name": null,
00:24:53.270  "uuid": "00000000-0000-0000-0000-000000000002",
00:24:53.270  "is_configured": false,
00:24:53.270  "data_offset": 2048,
00:24:53.270  "data_size": 63488
00:24:53.270  }
00:24:53.270  ]
00:24:53.270  }'
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:53.270  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.838  [2024-10-28 13:36:07.789403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:24:53.838  [2024-10-28 13:36:07.789541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:53.838  [2024-10-28 13:36:07.789580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:24:53.838  [2024-10-28 13:36:07.789601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:53.838  [2024-10-28 13:36:07.790304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:53.838  [2024-10-28 13:36:07.790359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:24:53.838  [2024-10-28 13:36:07.790488] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:24:53.838  [2024-10-28 13:36:07.790542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:24:53.838  [2024-10-28 13:36:07.790693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:24:53.838  [2024-10-28 13:36:07.790733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:24:53.838  [2024-10-28 13:36:07.791082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:24:53.838  [2024-10-28 13:36:07.791324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:24:53.838  [2024-10-28 13:36:07.791600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:24:53.838  [2024-10-28 13:36:07.791787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:53.838  pt2
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:53.838  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:53.839  "name": "raid_bdev1",
00:24:53.839  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:53.839  "strip_size_kb": 0,
00:24:53.839  "state": "online",
00:24:53.839  "raid_level": "raid1",
00:24:53.839  "superblock": true,
00:24:53.839  "num_base_bdevs": 2,
00:24:53.839  "num_base_bdevs_discovered": 2,
00:24:53.839  "num_base_bdevs_operational": 2,
00:24:53.839  "base_bdevs_list": [
00:24:53.839  {
00:24:53.839  "name": "pt1",
00:24:53.839  "uuid": "00000000-0000-0000-0000-000000000001",
00:24:53.839  "is_configured": true,
00:24:53.839  "data_offset": 2048,
00:24:53.839  "data_size": 63488
00:24:53.839  },
00:24:53.839  {
00:24:53.839  "name": "pt2",
00:24:53.839  "uuid": "00000000-0000-0000-0000-000000000002",
00:24:53.839  "is_configured": true,
00:24:53.839  "data_offset": 2048,
00:24:53.839  "data_size": 63488
00:24:53.839  }
00:24:53.839  ]
00:24:53.839  }'
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:53.839  13:36:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.410  [2024-10-28 13:36:08.325856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.410  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:54.410  "name": "raid_bdev1",
00:24:54.410  "aliases": [
00:24:54.410  "8576ab63-cbe1-41c9-b76e-e31f22ba246b"
00:24:54.410  ],
00:24:54.410  "product_name": "Raid Volume",
00:24:54.410  "block_size": 512,
00:24:54.410  "num_blocks": 63488,
00:24:54.410  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:54.410  "assigned_rate_limits": {
00:24:54.410  "rw_ios_per_sec": 0,
00:24:54.410  "rw_mbytes_per_sec": 0,
00:24:54.410  "r_mbytes_per_sec": 0,
00:24:54.410  "w_mbytes_per_sec": 0
00:24:54.410  },
00:24:54.410  "claimed": false,
00:24:54.410  "zoned": false,
00:24:54.410  "supported_io_types": {
00:24:54.410  "read": true,
00:24:54.410  "write": true,
00:24:54.410  "unmap": false,
00:24:54.410  "flush": false,
00:24:54.410  "reset": true,
00:24:54.410  "nvme_admin": false,
00:24:54.410  "nvme_io": false,
00:24:54.410  "nvme_io_md": false,
00:24:54.410  "write_zeroes": true,
00:24:54.410  "zcopy": false,
00:24:54.410  "get_zone_info": false,
00:24:54.410  "zone_management": false,
00:24:54.410  "zone_append": false,
00:24:54.410  "compare": false,
00:24:54.410  "compare_and_write": false,
00:24:54.410  "abort": false,
00:24:54.410  "seek_hole": false,
00:24:54.410  "seek_data": false,
00:24:54.410  "copy": false,
00:24:54.410  "nvme_iov_md": false
00:24:54.410  },
00:24:54.410  "memory_domains": [
00:24:54.410  {
00:24:54.410  "dma_device_id": "system",
00:24:54.410  "dma_device_type": 1
00:24:54.410  },
00:24:54.410  {
00:24:54.410  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:54.410  "dma_device_type": 2
00:24:54.410  },
00:24:54.410  {
00:24:54.410  "dma_device_id": "system",
00:24:54.410  "dma_device_type": 1
00:24:54.410  },
00:24:54.410  {
00:24:54.410  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:54.410  "dma_device_type": 2
00:24:54.410  }
00:24:54.410  ],
00:24:54.410  "driver_specific": {
00:24:54.410  "raid": {
00:24:54.410  "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b",
00:24:54.410  "strip_size_kb": 0,
00:24:54.410  "state": "online",
00:24:54.410  "raid_level": "raid1",
00:24:54.410  "superblock": true,
00:24:54.410  "num_base_bdevs": 2,
00:24:54.410  "num_base_bdevs_discovered": 2,
00:24:54.410  "num_base_bdevs_operational": 2,
00:24:54.410  "base_bdevs_list": [
00:24:54.410  {
00:24:54.410  "name": "pt1",
00:24:54.411  "uuid": "00000000-0000-0000-0000-000000000001",
00:24:54.411  "is_configured": true,
00:24:54.411  "data_offset": 2048,
00:24:54.411  "data_size": 63488
00:24:54.411  },
00:24:54.411  {
00:24:54.411  "name": "pt2",
00:24:54.411  "uuid": "00000000-0000-0000-0000-000000000002",
00:24:54.411  "is_configured": true,
00:24:54.411  "data_offset": 2048,
00:24:54.411  "data_size": 63488
00:24:54.411  }
00:24:54.411  ]
00:24:54.411  }
00:24:54.411  }
00:24:54.411  }'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:24:54.411  pt2'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.411  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.669  [2024-10-28 13:36:08.605907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8576ab63-cbe1-41c9-b76e-e31f22ba246b '!=' 8576ab63-cbe1-41c9-b76e-e31f22ba246b ']'
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:24:54.669  [2024-10-28 13:36:08.657626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:54.669  13:36:08 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.669 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.669 "name": "raid_bdev1", 00:24:54.669 "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b", 00:24:54.669 "strip_size_kb": 0, 00:24:54.669 "state": "online", 00:24:54.669 "raid_level": "raid1", 00:24:54.669 "superblock": true, 00:24:54.669 "num_base_bdevs": 2, 00:24:54.669 "num_base_bdevs_discovered": 1, 00:24:54.669 "num_base_bdevs_operational": 1, 00:24:54.669 "base_bdevs_list": [ 00:24:54.669 { 00:24:54.669 "name": null, 00:24:54.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.669 "is_configured": false, 00:24:54.669 "data_offset": 0, 00:24:54.669 "data_size": 63488 00:24:54.669 }, 00:24:54.669 { 00:24:54.669 "name": "pt2", 00:24:54.669 
"uuid": "00000000-0000-0000-0000-000000000002", 00:24:54.669 "is_configured": true, 00:24:54.669 "data_offset": 2048, 00:24:54.669 "data_size": 63488 00:24:54.669 } 00:24:54.669 ] 00:24:54.670 }' 00:24:54.670 13:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.670 13:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.236 [2024-10-28 13:36:09.177719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.236 [2024-10-28 13:36:09.177775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:55.236 [2024-10-28 13:36:09.177882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.236 [2024-10-28 13:36:09.177949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:55.236 [2024-10-28 13:36:09.177969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.236 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.236 [2024-10-28 13:36:09.249767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:55.236 [2024-10-28 13:36:09.249868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.236 [2024-10-28 13:36:09.249896] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:55.236 [2024-10-28 13:36:09.249916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.236 [2024-10-28 13:36:09.252903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.236 [2024-10-28 13:36:09.252960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:55.236 [2024-10-28 13:36:09.253066] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:55.236 [2024-10-28 13:36:09.253121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:55.236 [2024-10-28 13:36:09.253245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:55.236 [2024-10-28 13:36:09.253266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:55.236 [2024-10-28 13:36:09.253562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:55.237 [2024-10-28 13:36:09.253736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:55.237 [2024-10-28 13:36:09.253752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:55.237 [2024-10-28 13:36:09.253948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.237 pt2 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.237 "name": "raid_bdev1", 00:24:55.237 "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b", 00:24:55.237 "strip_size_kb": 0, 00:24:55.237 "state": "online", 00:24:55.237 "raid_level": "raid1", 00:24:55.237 "superblock": true, 00:24:55.237 "num_base_bdevs": 2, 00:24:55.237 "num_base_bdevs_discovered": 1, 00:24:55.237 "num_base_bdevs_operational": 1, 00:24:55.237 "base_bdevs_list": [ 00:24:55.237 { 00:24:55.237 "name": null, 00:24:55.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.237 "is_configured": false, 00:24:55.237 "data_offset": 2048, 00:24:55.237 "data_size": 63488 00:24:55.237 }, 00:24:55.237 { 00:24:55.237 "name": "pt2", 
00:24:55.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.237 "is_configured": true, 00:24:55.237 "data_offset": 2048, 00:24:55.237 "data_size": 63488 00:24:55.237 } 00:24:55.237 ] 00:24:55.237 }' 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.237 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 [2024-10-28 13:36:09.774107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.803 [2024-10-28 13:36:09.774190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:55.803 [2024-10-28 13:36:09.774290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.803 [2024-10-28 13:36:09.774363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:55.803 [2024-10-28 13:36:09.774380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 [2024-10-28 13:36:09.834114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:55.803 [2024-10-28 13:36:09.834239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.803 [2024-10-28 13:36:09.834281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:55.803 [2024-10-28 13:36:09.834296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.803 [2024-10-28 13:36:09.837251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.803 [2024-10-28 13:36:09.837300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:55.803 [2024-10-28 13:36:09.837412] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:55.803 [2024-10-28 13:36:09.837457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:55.803 [2024-10-28 13:36:09.837607] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:55.803 [2024-10-28 13:36:09.837625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.803 [2024-10-28 
13:36:09.837658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:24:55.803 [2024-10-28 13:36:09.837709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:55.803 [2024-10-28 13:36:09.837824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:24:55.803 [2024-10-28 13:36:09.837841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:55.803 [2024-10-28 13:36:09.838169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:55.803 [2024-10-28 13:36:09.838329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:24:55.803 [2024-10-28 13:36:09.838351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:24:55.803 [2024-10-28 13:36:09.838569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.803 pt1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.803 "name": "raid_bdev1", 00:24:55.803 "uuid": "8576ab63-cbe1-41c9-b76e-e31f22ba246b", 00:24:55.803 "strip_size_kb": 0, 00:24:55.803 "state": "online", 00:24:55.803 "raid_level": "raid1", 00:24:55.803 "superblock": true, 00:24:55.803 "num_base_bdevs": 2, 00:24:55.803 "num_base_bdevs_discovered": 1, 00:24:55.803 "num_base_bdevs_operational": 1, 00:24:55.803 "base_bdevs_list": [ 00:24:55.803 { 00:24:55.803 "name": null, 00:24:55.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.803 "is_configured": false, 00:24:55.803 "data_offset": 2048, 00:24:55.803 "data_size": 63488 00:24:55.803 }, 00:24:55.803 { 00:24:55.803 "name": "pt2", 00:24:55.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.803 "is_configured": true, 00:24:55.803 "data_offset": 2048, 00:24:55.803 "data_size": 63488 00:24:55.803 } 00:24:55.803 ] 00:24:55.803 }' 00:24:55.803 13:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.803 13:36:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:56.368 [2024-10-28 13:36:10.403026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8576ab63-cbe1-41c9-b76e-e31f22ba246b '!=' 8576ab63-cbe1-41c9-b76e-e31f22ba246b ']' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76070 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76070 ']' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76070 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:56.368 13:36:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76070 00:24:56.368 killing process with pid 76070 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76070' 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76070 00:24:56.368 [2024-10-28 13:36:10.484379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:56.368 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76070 00:24:56.368 [2024-10-28 13:36:10.484521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:56.368 [2024-10-28 13:36:10.484590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:56.368 [2024-10-28 13:36:10.484611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:24:56.368 [2024-10-28 13:36:10.513737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:56.625 ************************************ 00:24:56.625 END TEST raid_superblock_test 00:24:56.625 ************************************ 00:24:56.625 13:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:56.625 00:24:56.625 real 0m5.874s 00:24:56.625 user 0m9.938s 00:24:56.625 sys 0m0.965s 00:24:56.625 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.625 13:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.882 
13:36:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:24:56.882 13:36:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:56.882 13:36:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.882 13:36:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:56.882 ************************************ 00:24:56.882 START TEST raid_read_error_test 00:24:56.882 ************************************ 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:56.882 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:56.883 13:36:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d51k6F3ZEz 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76400 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76400 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76400 ']' 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.883 13:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.883 [2024-10-28 13:36:10.932722] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:24:56.883 [2024-10-28 13:36:10.932953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76400 ] 00:24:57.157 [2024-10-28 13:36:11.088682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:57.157 [2024-10-28 13:36:11.124071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.157 [2024-10-28 13:36:11.194988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.157 [2024-10-28 13:36:11.276545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:57.157 [2024-10-28 13:36:11.276623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.800 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.800 BaseBdev1_malloc 00:24:57.800 13:36:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 true 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 [2024-10-28 13:36:11.976675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:58.059 [2024-10-28 13:36:11.976769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.059 [2024-10-28 13:36:11.976810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:58.059 [2024-10-28 13:36:11.976835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.059 [2024-10-28 13:36:11.979957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.059 [2024-10-28 13:36:11.980008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:58.059 BaseBdev1 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 BaseBdev2_malloc 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 true 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 [2024-10-28 13:36:12.024127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:58.059 [2024-10-28 13:36:12.024226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.059 [2024-10-28 13:36:12.024257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:58.059 [2024-10-28 13:36:12.024276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.059 [2024-10-28 13:36:12.027295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.059 [2024-10-28 13:36:12.027347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:58.059 BaseBdev2 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 [2024-10-28 13:36:12.036214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:58.059 [2024-10-28 13:36:12.038889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:58.059 [2024-10-28 13:36:12.039181] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:58.059 [2024-10-28 13:36:12.039206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:58.059 [2024-10-28 13:36:12.039587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:58.059 [2024-10-28 13:36:12.039828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:58.059 [2024-10-28 13:36:12.039855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:58.059 [2024-10-28 13:36:12.040106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.059 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.059 "name": "raid_bdev1", 00:24:58.059 "uuid": "6c8a71f0-ca33-4a74-9031-087d62cece3f", 00:24:58.059 "strip_size_kb": 0, 00:24:58.059 "state": "online", 00:24:58.059 "raid_level": "raid1", 00:24:58.059 "superblock": true, 00:24:58.059 "num_base_bdevs": 2, 00:24:58.059 "num_base_bdevs_discovered": 2, 00:24:58.059 "num_base_bdevs_operational": 2, 00:24:58.059 "base_bdevs_list": [ 00:24:58.059 { 00:24:58.059 "name": "BaseBdev1", 00:24:58.059 "uuid": "fad9762a-5fd5-5054-a509-1aa5ccb6cb94", 00:24:58.059 "is_configured": true, 00:24:58.059 "data_offset": 2048, 00:24:58.059 "data_size": 63488 00:24:58.059 }, 00:24:58.060 { 00:24:58.060 "name": "BaseBdev2", 00:24:58.060 "uuid": 
"eedadb8c-c71e-5c5b-b8a4-c8be9b063f6a", 00:24:58.060 "is_configured": true, 00:24:58.060 "data_offset": 2048, 00:24:58.060 "data_size": 63488 00:24:58.060 } 00:24:58.060 ] 00:24:58.060 }' 00:24:58.060 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.060 13:36:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.626 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:58.626 13:36:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:58.626 [2024-10-28 13:36:12.701117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.560 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:59.560 "name": "raid_bdev1", 00:24:59.560 "uuid": "6c8a71f0-ca33-4a74-9031-087d62cece3f", 00:24:59.560 "strip_size_kb": 0, 00:24:59.560 "state": "online", 00:24:59.560 "raid_level": "raid1", 00:24:59.560 "superblock": true, 00:24:59.560 "num_base_bdevs": 2, 00:24:59.560 "num_base_bdevs_discovered": 2, 00:24:59.560 "num_base_bdevs_operational": 2, 00:24:59.560 "base_bdevs_list": [ 00:24:59.560 { 00:24:59.560 "name": "BaseBdev1", 00:24:59.560 "uuid": "fad9762a-5fd5-5054-a509-1aa5ccb6cb94", 00:24:59.560 "is_configured": true, 00:24:59.560 "data_offset": 2048, 00:24:59.560 
"data_size": 63488 00:24:59.560 }, 00:24:59.560 { 00:24:59.561 "name": "BaseBdev2", 00:24:59.561 "uuid": "eedadb8c-c71e-5c5b-b8a4-c8be9b063f6a", 00:24:59.561 "is_configured": true, 00:24:59.561 "data_offset": 2048, 00:24:59.561 "data_size": 63488 00:24:59.561 } 00:24:59.561 ] 00:24:59.561 }' 00:24:59.561 13:36:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:59.561 13:36:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.127 [2024-10-28 13:36:14.119253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.127 [2024-10-28 13:36:14.119314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.127 [2024-10-28 13:36:14.122906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.127 [2024-10-28 13:36:14.122988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.127 [2024-10-28 13:36:14.123201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.127 [2024-10-28 13:36:14.123237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:00.127 { 00:25:00.127 "results": [ 00:25:00.127 { 00:25:00.127 "job": "raid_bdev1", 00:25:00.127 "core_mask": "0x1", 00:25:00.127 "workload": "randrw", 00:25:00.127 "percentage": 50, 00:25:00.127 "status": "finished", 00:25:00.127 "queue_depth": 1, 00:25:00.127 "io_size": 131072, 00:25:00.127 "runtime": 1.415653, 00:25:00.127 "iops": 11121.369431633317, 00:25:00.127 "mibps": 1390.1711789541646, 
00:25:00.127 "io_failed": 0, 00:25:00.127 "io_timeout": 0, 00:25:00.127 "avg_latency_us": 85.49817442719882, 00:25:00.127 "min_latency_us": 44.68363636363637, 00:25:00.127 "max_latency_us": 1854.370909090909 00:25:00.127 } 00:25:00.127 ], 00:25:00.127 "core_count": 1 00:25:00.127 } 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76400 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76400 ']' 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76400 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76400 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.127 killing process with pid 76400 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76400' 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76400 00:25:00.127 [2024-10-28 13:36:14.169843] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.127 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76400 00:25:00.127 [2024-10-28 13:36:14.190568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d51k6F3ZEz 00:25:00.388 13:36:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:25:00.388 00:25:00.388 real 0m3.643s 00:25:00.388 user 0m4.776s 00:25:00.388 sys 0m0.633s 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.388 13:36:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.388 ************************************ 00:25:00.388 END TEST raid_read_error_test 00:25:00.388 ************************************ 00:25:00.388 13:36:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:25:00.388 13:36:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:00.388 13:36:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.388 13:36:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.388 ************************************ 00:25:00.388 START TEST raid_write_error_test 00:25:00.388 ************************************ 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:25:00.388 13:36:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:00.388 13:36:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kvAXWy7iMl 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76529 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76529 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76529 ']' 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.388 13:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.659 [2024-10-28 13:36:14.623063] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:00.659 [2024-10-28 13:36:14.623267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76529 ] 00:25:00.659 [2024-10-28 13:36:14.770842] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:00.659 [2024-10-28 13:36:14.801361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.916 [2024-10-28 13:36:14.862214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.916 [2024-10-28 13:36:14.922222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.916 [2024-10-28 13:36:14.922278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 BaseBdev1_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 true 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 [2024-10-28 13:36:15.696505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:01.849 [2024-10-28 13:36:15.696612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.849 [2024-10-28 13:36:15.696647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:01.849 [2024-10-28 13:36:15.696670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.849 [2024-10-28 13:36:15.699888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.849 [2024-10-28 13:36:15.699955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:01.849 BaseBdev1 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 BaseBdev2_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 true 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 [2024-10-28 13:36:15.741328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:01.849 [2024-10-28 13:36:15.741419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.849 [2024-10-28 13:36:15.741451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:01.849 [2024-10-28 13:36:15.741470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.849 [2024-10-28 13:36:15.744531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.849 [2024-10-28 13:36:15.744600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:01.849 BaseBdev2 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 [2024-10-28 13:36:15.753364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:01.849 [2024-10-28 13:36:15.756077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:01.849 [2024-10-28 13:36:15.756410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:01.849 [2024-10-28 
13:36:15.756438] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:01.849 [2024-10-28 13:36:15.756834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:01.849 [2024-10-28 13:36:15.757073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:01.849 [2024-10-28 13:36:15.757100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:01.849 [2024-10-28 13:36:15.757467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.849 13:36:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.849 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.849 "name": "raid_bdev1", 00:25:01.849 "uuid": "b1291551-e95f-40d3-8527-61956514c72e", 00:25:01.849 "strip_size_kb": 0, 00:25:01.849 "state": "online", 00:25:01.849 "raid_level": "raid1", 00:25:01.849 "superblock": true, 00:25:01.849 "num_base_bdevs": 2, 00:25:01.849 "num_base_bdevs_discovered": 2, 00:25:01.849 "num_base_bdevs_operational": 2, 00:25:01.849 "base_bdevs_list": [ 00:25:01.849 { 00:25:01.849 "name": "BaseBdev1", 00:25:01.849 "uuid": "5ce3eca9-eddf-5184-8cb1-8c33d9e14327", 00:25:01.849 "is_configured": true, 00:25:01.849 "data_offset": 2048, 00:25:01.849 "data_size": 63488 00:25:01.849 }, 00:25:01.849 { 00:25:01.849 "name": "BaseBdev2", 00:25:01.850 "uuid": "7a61d6b7-95f4-5f86-b179-6a0cc88d4ed9", 00:25:01.850 "is_configured": true, 00:25:01.850 "data_offset": 2048, 00:25:01.850 "data_size": 63488 00:25:01.850 } 00:25:01.850 ] 00:25:01.850 }' 00:25:01.850 13:36:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.850 13:36:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.416 13:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:02.416 13:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:02.416 [2024-10-28 13:36:16.390132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:25:03.350 13:36:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.350 [2024-10-28 13:36:17.293346] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:25:03.350 [2024-10-28 13:36:17.293436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:03.350 [2024-10-28 13:36:17.293690] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:25:03.350 
13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.350 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.350 "name": "raid_bdev1", 00:25:03.350 "uuid": "b1291551-e95f-40d3-8527-61956514c72e", 00:25:03.350 "strip_size_kb": 0, 00:25:03.350 "state": "online", 00:25:03.350 "raid_level": "raid1", 00:25:03.350 "superblock": true, 00:25:03.351 "num_base_bdevs": 2, 00:25:03.351 "num_base_bdevs_discovered": 1, 00:25:03.351 "num_base_bdevs_operational": 1, 00:25:03.351 "base_bdevs_list": [ 00:25:03.351 { 00:25:03.351 "name": null, 00:25:03.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.351 "is_configured": false, 00:25:03.351 "data_offset": 0, 00:25:03.351 "data_size": 63488 00:25:03.351 }, 00:25:03.351 { 00:25:03.351 "name": "BaseBdev2", 00:25:03.351 "uuid": "7a61d6b7-95f4-5f86-b179-6a0cc88d4ed9", 00:25:03.351 "is_configured": true, 00:25:03.351 "data_offset": 2048, 00:25:03.351 "data_size": 63488 00:25:03.351 } 00:25:03.351 ] 00:25:03.351 }' 00:25:03.351 13:36:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.351 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:03.916 [2024-10-28 13:36:17.822498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.916 [2024-10-28 13:36:17.822557] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.916 [2024-10-28 13:36:17.826037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.916 [2024-10-28 13:36:17.826111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.916 [2024-10-28 13:36:17.826210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.916 [2024-10-28 13:36:17.826230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:03.916 { 00:25:03.916 "results": [ 00:25:03.916 { 00:25:03.916 "job": "raid_bdev1", 00:25:03.916 "core_mask": "0x1", 00:25:03.916 "workload": "randrw", 00:25:03.916 "percentage": 50, 00:25:03.916 "status": "finished", 00:25:03.916 "queue_depth": 1, 00:25:03.916 "io_size": 131072, 00:25:03.916 "runtime": 1.429602, 00:25:03.916 "iops": 12468.505220334051, 00:25:03.916 "mibps": 1558.5631525417564, 00:25:03.916 "io_failed": 0, 00:25:03.916 "io_timeout": 0, 00:25:03.916 "avg_latency_us": 75.63005094989163, 00:25:03.916 "min_latency_us": 42.589090909090906, 00:25:03.916 "max_latency_us": 1980.9745454545455 00:25:03.916 } 00:25:03.916 ], 00:25:03.916 "core_count": 1 00:25:03.916 } 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76529 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76529 ']' 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76529 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76529 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:03.916 killing process with pid 76529 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76529' 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76529 00:25:03.916 [2024-10-28 13:36:17.865735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:03.916 13:36:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76529 00:25:03.916 [2024-10-28 13:36:17.886849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kvAXWy7iMl 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:25:04.175 00:25:04.175 real 0m3.633s 00:25:04.175 user 0m4.845s 00:25:04.175 sys 0m0.535s 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:04.175 13:36:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.175 ************************************ 00:25:04.175 END TEST raid_write_error_test 00:25:04.175 ************************************ 00:25:04.175 13:36:18 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:25:04.175 13:36:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:04.175 13:36:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:25:04.175 13:36:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:04.175 13:36:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:04.175 13:36:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:04.175 ************************************ 00:25:04.175 START TEST raid_state_function_test 00:25:04.175 ************************************ 00:25:04.175 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:25:04.175 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:25:04.175 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:04.175 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:04.175 13:36:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:04.176 Process raid pid: 76667 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:04.176 13:36:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76667 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76667' 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76667 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76667 ']' 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.176 13:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:04.176 [2024-10-28 13:36:18.323588] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:25:04.176 [2024-10-28 13:36:18.323805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.434 [2024-10-28 13:36:18.481675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:04.434 [2024-10-28 13:36:18.514679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.434 [2024-10-28 13:36:18.590812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.692 [2024-10-28 13:36:18.677952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.692 [2024-10-28 13:36:18.677995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:05.258 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.258 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:05.258 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.259 [2024-10-28 13:36:19.354292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:05.259 [2024-10-28 13:36:19.354364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:05.259 [2024-10-28 13:36:19.354387] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:05.259 [2024-10-28 13:36:19.354401] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:05.259 [2024-10-28 13:36:19.354420] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:05.259 [2024-10-28 13:36:19.354432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.259 13:36:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:05.259 "name": "Existed_Raid", 00:25:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.259 "strip_size_kb": 64, 00:25:05.259 "state": "configuring", 00:25:05.259 "raid_level": "raid0", 00:25:05.259 "superblock": false, 00:25:05.259 "num_base_bdevs": 3, 00:25:05.259 "num_base_bdevs_discovered": 0, 00:25:05.259 "num_base_bdevs_operational": 3, 00:25:05.259 "base_bdevs_list": [ 00:25:05.259 { 00:25:05.259 "name": "BaseBdev1", 00:25:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.259 "is_configured": false, 00:25:05.259 "data_offset": 0, 00:25:05.259 "data_size": 0 00:25:05.259 }, 00:25:05.259 { 00:25:05.259 "name": "BaseBdev2", 00:25:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.259 "is_configured": false, 00:25:05.259 "data_offset": 0, 00:25:05.259 "data_size": 0 00:25:05.259 }, 00:25:05.259 { 00:25:05.259 "name": "BaseBdev3", 00:25:05.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.259 "is_configured": false, 00:25:05.259 "data_offset": 0, 00:25:05.259 "data_size": 0 00:25:05.259 } 00:25:05.259 ] 00:25:05.259 }' 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:05.259 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.823 [2024-10-28 13:36:19.886337] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.823 [2024-10-28 13:36:19.886381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.823 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.823 [2024-10-28 13:36:19.894380] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:05.823 [2024-10-28 13:36:19.894439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:05.823 [2024-10-28 13:36:19.894460] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:05.824 [2024-10-28 13:36:19.894473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:05.824 [2024-10-28 13:36:19.894485] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:05.824 [2024-10-28 13:36:19.894497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.824 
[2024-10-28 13:36:19.914905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.824 BaseBdev1 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.824 [ 00:25:05.824 { 00:25:05.824 "name": "BaseBdev1", 00:25:05.824 "aliases": [ 00:25:05.824 "7edf683b-f064-4e77-bf9c-a38c5a3c62ad" 00:25:05.824 ], 00:25:05.824 "product_name": "Malloc disk", 00:25:05.824 "block_size": 512, 00:25:05.824 "num_blocks": 65536, 00:25:05.824 "uuid": 
"7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:05.824 "assigned_rate_limits": { 00:25:05.824 "rw_ios_per_sec": 0, 00:25:05.824 "rw_mbytes_per_sec": 0, 00:25:05.824 "r_mbytes_per_sec": 0, 00:25:05.824 "w_mbytes_per_sec": 0 00:25:05.824 }, 00:25:05.824 "claimed": true, 00:25:05.824 "claim_type": "exclusive_write", 00:25:05.824 "zoned": false, 00:25:05.824 "supported_io_types": { 00:25:05.824 "read": true, 00:25:05.824 "write": true, 00:25:05.824 "unmap": true, 00:25:05.824 "flush": true, 00:25:05.824 "reset": true, 00:25:05.824 "nvme_admin": false, 00:25:05.824 "nvme_io": false, 00:25:05.824 "nvme_io_md": false, 00:25:05.824 "write_zeroes": true, 00:25:05.824 "zcopy": true, 00:25:05.824 "get_zone_info": false, 00:25:05.824 "zone_management": false, 00:25:05.824 "zone_append": false, 00:25:05.824 "compare": false, 00:25:05.824 "compare_and_write": false, 00:25:05.824 "abort": true, 00:25:05.824 "seek_hole": false, 00:25:05.824 "seek_data": false, 00:25:05.824 "copy": true, 00:25:05.824 "nvme_iov_md": false 00:25:05.824 }, 00:25:05.824 "memory_domains": [ 00:25:05.824 { 00:25:05.824 "dma_device_id": "system", 00:25:05.824 "dma_device_type": 1 00:25:05.824 }, 00:25:05.824 { 00:25:05.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.824 "dma_device_type": 2 00:25:05.824 } 00:25:05.824 ], 00:25:05.824 "driver_specific": {} 00:25:05.824 } 00:25:05.824 ] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:05.824 13:36:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.824 13:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.081 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.081 "name": "Existed_Raid", 00:25:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.081 "strip_size_kb": 64, 00:25:06.081 "state": "configuring", 00:25:06.081 "raid_level": "raid0", 00:25:06.081 "superblock": false, 00:25:06.081 "num_base_bdevs": 3, 00:25:06.081 "num_base_bdevs_discovered": 1, 00:25:06.081 "num_base_bdevs_operational": 3, 00:25:06.081 "base_bdevs_list": [ 00:25:06.081 { 00:25:06.081 "name": "BaseBdev1", 00:25:06.081 "uuid": "7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:06.081 "is_configured": true, 00:25:06.081 "data_offset": 0, 
00:25:06.081 "data_size": 65536 00:25:06.081 }, 00:25:06.081 { 00:25:06.081 "name": "BaseBdev2", 00:25:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.081 "is_configured": false, 00:25:06.081 "data_offset": 0, 00:25:06.081 "data_size": 0 00:25:06.081 }, 00:25:06.081 { 00:25:06.081 "name": "BaseBdev3", 00:25:06.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.081 "is_configured": false, 00:25:06.081 "data_offset": 0, 00:25:06.081 "data_size": 0 00:25:06.081 } 00:25:06.081 ] 00:25:06.081 }' 00:25:06.081 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.081 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.340 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:06.340 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.340 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.599 [2024-10-28 13:36:20.499233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:06.599 [2024-10-28 13:36:20.499322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.599 [2024-10-28 13:36:20.507239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:06.599 [2024-10-28 
13:36:20.510402] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:06.599 [2024-10-28 13:36:20.510491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:06.599 [2024-10-28 13:36:20.510529] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:06.599 [2024-10-28 13:36:20.510552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:06.599 13:36:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.599 "name": "Existed_Raid", 00:25:06.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.599 "strip_size_kb": 64, 00:25:06.599 "state": "configuring", 00:25:06.599 "raid_level": "raid0", 00:25:06.599 "superblock": false, 00:25:06.599 "num_base_bdevs": 3, 00:25:06.599 "num_base_bdevs_discovered": 1, 00:25:06.599 "num_base_bdevs_operational": 3, 00:25:06.599 "base_bdevs_list": [ 00:25:06.599 { 00:25:06.599 "name": "BaseBdev1", 00:25:06.599 "uuid": "7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:06.599 "is_configured": true, 00:25:06.599 "data_offset": 0, 00:25:06.599 "data_size": 65536 00:25:06.599 }, 00:25:06.599 { 00:25:06.599 "name": "BaseBdev2", 00:25:06.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.599 "is_configured": false, 00:25:06.599 "data_offset": 0, 00:25:06.599 "data_size": 0 00:25:06.599 }, 00:25:06.599 { 00:25:06.599 "name": "BaseBdev3", 00:25:06.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.599 "is_configured": false, 00:25:06.599 "data_offset": 0, 00:25:06.599 "data_size": 0 00:25:06.599 } 00:25:06.599 ] 00:25:06.599 }' 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.599 13:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.167 13:36:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.167 [2024-10-28 13:36:21.088891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.167 BaseBdev2 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.167 [ 00:25:07.167 { 00:25:07.167 "name": "BaseBdev2", 00:25:07.167 "aliases": [ 00:25:07.167 "8edbda0b-e493-4be6-abf8-fc34d2f2a427" 00:25:07.167 ], 00:25:07.167 "product_name": "Malloc disk", 00:25:07.167 "block_size": 512, 00:25:07.167 "num_blocks": 65536, 00:25:07.167 "uuid": "8edbda0b-e493-4be6-abf8-fc34d2f2a427", 00:25:07.167 "assigned_rate_limits": { 00:25:07.167 "rw_ios_per_sec": 0, 00:25:07.167 "rw_mbytes_per_sec": 0, 00:25:07.167 "r_mbytes_per_sec": 0, 00:25:07.167 "w_mbytes_per_sec": 0 00:25:07.167 }, 00:25:07.167 "claimed": true, 00:25:07.167 "claim_type": "exclusive_write", 00:25:07.167 "zoned": false, 00:25:07.167 "supported_io_types": { 00:25:07.167 "read": true, 00:25:07.167 "write": true, 00:25:07.167 "unmap": true, 00:25:07.167 "flush": true, 00:25:07.167 "reset": true, 00:25:07.167 "nvme_admin": false, 00:25:07.167 "nvme_io": false, 00:25:07.167 "nvme_io_md": false, 00:25:07.167 "write_zeroes": true, 00:25:07.167 "zcopy": true, 00:25:07.167 "get_zone_info": false, 00:25:07.167 "zone_management": false, 00:25:07.167 "zone_append": false, 00:25:07.167 "compare": false, 00:25:07.167 "compare_and_write": false, 00:25:07.167 "abort": true, 00:25:07.167 "seek_hole": false, 00:25:07.167 "seek_data": false, 00:25:07.167 "copy": true, 00:25:07.167 "nvme_iov_md": false 00:25:07.167 }, 00:25:07.167 "memory_domains": [ 00:25:07.167 { 00:25:07.167 "dma_device_id": "system", 00:25:07.167 "dma_device_type": 1 00:25:07.167 }, 00:25:07.167 { 00:25:07.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.167 "dma_device_type": 2 00:25:07.167 } 00:25:07.167 ], 00:25:07.167 "driver_specific": {} 00:25:07.167 } 00:25:07.167 ] 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.167 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.168 "name": "Existed_Raid", 
00:25:07.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.168 "strip_size_kb": 64, 00:25:07.168 "state": "configuring", 00:25:07.168 "raid_level": "raid0", 00:25:07.168 "superblock": false, 00:25:07.168 "num_base_bdevs": 3, 00:25:07.168 "num_base_bdevs_discovered": 2, 00:25:07.168 "num_base_bdevs_operational": 3, 00:25:07.168 "base_bdevs_list": [ 00:25:07.168 { 00:25:07.168 "name": "BaseBdev1", 00:25:07.168 "uuid": "7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:07.168 "is_configured": true, 00:25:07.168 "data_offset": 0, 00:25:07.168 "data_size": 65536 00:25:07.168 }, 00:25:07.168 { 00:25:07.168 "name": "BaseBdev2", 00:25:07.168 "uuid": "8edbda0b-e493-4be6-abf8-fc34d2f2a427", 00:25:07.168 "is_configured": true, 00:25:07.168 "data_offset": 0, 00:25:07.168 "data_size": 65536 00:25:07.168 }, 00:25:07.168 { 00:25:07.168 "name": "BaseBdev3", 00:25:07.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.168 "is_configured": false, 00:25:07.168 "data_offset": 0, 00:25:07.168 "data_size": 0 00:25:07.168 } 00:25:07.168 ] 00:25:07.168 }' 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.168 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.736 [2024-10-28 13:36:21.715707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.736 [2024-10-28 13:36:21.715812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:07.736 [2024-10-28 13:36:21.715845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:25:07.736 [2024-10-28 13:36:21.716569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:07.736 [2024-10-28 13:36:21.717023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:07.736 [2024-10-28 13:36:21.717087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:25:07.736 [2024-10-28 13:36:21.717578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.736 BaseBdev3 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:25:07.736 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.736 [ 00:25:07.736 { 00:25:07.736 "name": "BaseBdev3", 00:25:07.736 "aliases": [ 00:25:07.736 "be53d2be-c4f7-4248-b741-c45eb436b6b1" 00:25:07.736 ], 00:25:07.736 "product_name": "Malloc disk", 00:25:07.736 "block_size": 512, 00:25:07.736 "num_blocks": 65536, 00:25:07.736 "uuid": "be53d2be-c4f7-4248-b741-c45eb436b6b1", 00:25:07.736 "assigned_rate_limits": { 00:25:07.736 "rw_ios_per_sec": 0, 00:25:07.736 "rw_mbytes_per_sec": 0, 00:25:07.736 "r_mbytes_per_sec": 0, 00:25:07.736 "w_mbytes_per_sec": 0 00:25:07.736 }, 00:25:07.736 "claimed": true, 00:25:07.736 "claim_type": "exclusive_write", 00:25:07.736 "zoned": false, 00:25:07.736 "supported_io_types": { 00:25:07.736 "read": true, 00:25:07.736 "write": true, 00:25:07.736 "unmap": true, 00:25:07.736 "flush": true, 00:25:07.736 "reset": true, 00:25:07.736 "nvme_admin": false, 00:25:07.736 "nvme_io": false, 00:25:07.736 "nvme_io_md": false, 00:25:07.736 "write_zeroes": true, 00:25:07.736 "zcopy": true, 00:25:07.736 "get_zone_info": false, 00:25:07.736 "zone_management": false, 00:25:07.736 "zone_append": false, 00:25:07.736 "compare": false, 00:25:07.736 "compare_and_write": false, 00:25:07.736 "abort": true, 00:25:07.736 "seek_hole": false, 00:25:07.736 "seek_data": false, 00:25:07.737 "copy": true, 00:25:07.737 "nvme_iov_md": false 00:25:07.737 }, 00:25:07.737 "memory_domains": [ 00:25:07.737 { 00:25:07.737 "dma_device_id": "system", 00:25:07.737 "dma_device_type": 1 00:25:07.737 }, 00:25:07.737 { 00:25:07.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.737 "dma_device_type": 2 00:25:07.737 } 00:25:07.737 ], 00:25:07.737 "driver_specific": {} 00:25:07.737 } 00:25:07.737 ] 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.737 "name": "Existed_Raid", 00:25:07.737 "uuid": "36ed210f-e520-4256-b9b5-e3ef8df0ba71", 00:25:07.737 "strip_size_kb": 64, 00:25:07.737 "state": "online", 00:25:07.737 "raid_level": "raid0", 00:25:07.737 "superblock": false, 00:25:07.737 "num_base_bdevs": 3, 00:25:07.737 "num_base_bdevs_discovered": 3, 00:25:07.737 "num_base_bdevs_operational": 3, 00:25:07.737 "base_bdevs_list": [ 00:25:07.737 { 00:25:07.737 "name": "BaseBdev1", 00:25:07.737 "uuid": "7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:07.737 "is_configured": true, 00:25:07.737 "data_offset": 0, 00:25:07.737 "data_size": 65536 00:25:07.737 }, 00:25:07.737 { 00:25:07.737 "name": "BaseBdev2", 00:25:07.737 "uuid": "8edbda0b-e493-4be6-abf8-fc34d2f2a427", 00:25:07.737 "is_configured": true, 00:25:07.737 "data_offset": 0, 00:25:07.737 "data_size": 65536 00:25:07.737 }, 00:25:07.737 { 00:25:07.737 "name": "BaseBdev3", 00:25:07.737 "uuid": "be53d2be-c4f7-4248-b741-c45eb436b6b1", 00:25:07.737 "is_configured": true, 00:25:07.737 "data_offset": 0, 00:25:07.737 "data_size": 65536 00:25:07.737 } 00:25:07.737 ] 00:25:07.737 }' 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.737 13:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.304 [2024-10-28 13:36:22.312457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.304 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:08.304 "name": "Existed_Raid", 00:25:08.304 "aliases": [ 00:25:08.304 "36ed210f-e520-4256-b9b5-e3ef8df0ba71" 00:25:08.304 ], 00:25:08.304 "product_name": "Raid Volume", 00:25:08.304 "block_size": 512, 00:25:08.304 "num_blocks": 196608, 00:25:08.304 "uuid": "36ed210f-e520-4256-b9b5-e3ef8df0ba71", 00:25:08.304 "assigned_rate_limits": { 00:25:08.304 "rw_ios_per_sec": 0, 00:25:08.304 "rw_mbytes_per_sec": 0, 00:25:08.304 "r_mbytes_per_sec": 0, 00:25:08.304 "w_mbytes_per_sec": 0 00:25:08.304 }, 00:25:08.304 "claimed": false, 00:25:08.304 "zoned": false, 00:25:08.304 "supported_io_types": { 00:25:08.304 "read": true, 00:25:08.304 "write": true, 00:25:08.304 "unmap": true, 00:25:08.304 "flush": true, 00:25:08.304 "reset": true, 00:25:08.304 "nvme_admin": false, 00:25:08.304 "nvme_io": false, 00:25:08.304 "nvme_io_md": false, 00:25:08.304 "write_zeroes": true, 00:25:08.304 "zcopy": false, 00:25:08.304 "get_zone_info": false, 00:25:08.304 "zone_management": false, 00:25:08.304 "zone_append": false, 00:25:08.304 "compare": false, 00:25:08.304 "compare_and_write": false, 00:25:08.304 "abort": false, 00:25:08.304 "seek_hole": false, 00:25:08.304 "seek_data": false, 00:25:08.304 "copy": 
false, 00:25:08.304 "nvme_iov_md": false 00:25:08.304 }, 00:25:08.304 "memory_domains": [ 00:25:08.304 { 00:25:08.304 "dma_device_id": "system", 00:25:08.304 "dma_device_type": 1 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.304 "dma_device_type": 2 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "dma_device_id": "system", 00:25:08.304 "dma_device_type": 1 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.304 "dma_device_type": 2 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "dma_device_id": "system", 00:25:08.304 "dma_device_type": 1 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.304 "dma_device_type": 2 00:25:08.304 } 00:25:08.304 ], 00:25:08.304 "driver_specific": { 00:25:08.304 "raid": { 00:25:08.304 "uuid": "36ed210f-e520-4256-b9b5-e3ef8df0ba71", 00:25:08.304 "strip_size_kb": 64, 00:25:08.304 "state": "online", 00:25:08.304 "raid_level": "raid0", 00:25:08.304 "superblock": false, 00:25:08.304 "num_base_bdevs": 3, 00:25:08.304 "num_base_bdevs_discovered": 3, 00:25:08.304 "num_base_bdevs_operational": 3, 00:25:08.304 "base_bdevs_list": [ 00:25:08.304 { 00:25:08.304 "name": "BaseBdev1", 00:25:08.304 "uuid": "7edf683b-f064-4e77-bf9c-a38c5a3c62ad", 00:25:08.304 "is_configured": true, 00:25:08.304 "data_offset": 0, 00:25:08.304 "data_size": 65536 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "name": "BaseBdev2", 00:25:08.304 "uuid": "8edbda0b-e493-4be6-abf8-fc34d2f2a427", 00:25:08.304 "is_configured": true, 00:25:08.304 "data_offset": 0, 00:25:08.304 "data_size": 65536 00:25:08.304 }, 00:25:08.304 { 00:25:08.304 "name": "BaseBdev3", 00:25:08.304 "uuid": "be53d2be-c4f7-4248-b741-c45eb436b6b1", 00:25:08.304 "is_configured": true, 00:25:08.304 "data_offset": 0, 00:25:08.305 "data_size": 65536 00:25:08.305 } 00:25:08.305 ] 00:25:08.305 } 00:25:08.305 } 00:25:08.305 }' 00:25:08.305 13:36:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:08.305 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:08.305 BaseBdev2 00:25:08.305 BaseBdev3' 00:25:08.305 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.564 [2024-10-28 13:36:22.648255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:08.564 [2024-10-28 13:36:22.648310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.564 [2024-10-28 13:36:22.648392] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.564 13:36:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.564 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.822 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.822 "name": "Existed_Raid", 00:25:08.822 "uuid": "36ed210f-e520-4256-b9b5-e3ef8df0ba71", 00:25:08.822 "strip_size_kb": 64, 00:25:08.822 "state": "offline", 00:25:08.822 "raid_level": "raid0", 00:25:08.822 "superblock": false, 00:25:08.822 "num_base_bdevs": 3, 00:25:08.822 "num_base_bdevs_discovered": 2, 00:25:08.822 "num_base_bdevs_operational": 2, 00:25:08.822 "base_bdevs_list": [ 00:25:08.822 { 00:25:08.822 "name": null, 00:25:08.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.822 "is_configured": false, 00:25:08.822 "data_offset": 0, 00:25:08.822 "data_size": 65536 00:25:08.822 }, 00:25:08.822 { 00:25:08.822 "name": "BaseBdev2", 00:25:08.822 "uuid": "8edbda0b-e493-4be6-abf8-fc34d2f2a427", 00:25:08.822 "is_configured": true, 00:25:08.822 "data_offset": 0, 00:25:08.822 "data_size": 65536 00:25:08.822 }, 00:25:08.822 { 00:25:08.823 "name": "BaseBdev3", 00:25:08.823 "uuid": "be53d2be-c4f7-4248-b741-c45eb436b6b1", 00:25:08.823 "is_configured": true, 00:25:08.823 "data_offset": 0, 00:25:08.823 "data_size": 65536 00:25:08.823 } 00:25:08.823 ] 00:25:08.823 }' 00:25:08.823 13:36:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.823 13:36:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.390 [2024-10-28 13:36:23.324101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:09.390 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.391 [2024-10-28 13:36:23.411977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:09.391 [2024-10-28 13:36:23.412063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:09.391 13:36:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.391 BaseBdev2 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.391 
13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.391 [ 00:25:09.391 { 00:25:09.391 "name": "BaseBdev2", 00:25:09.391 "aliases": [ 00:25:09.391 "46f651c0-d10c-41d9-bca7-ecf38e66545b" 00:25:09.391 ], 00:25:09.391 "product_name": "Malloc disk", 00:25:09.391 "block_size": 512, 00:25:09.391 "num_blocks": 65536, 00:25:09.391 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:09.391 "assigned_rate_limits": { 00:25:09.391 "rw_ios_per_sec": 0, 00:25:09.391 "rw_mbytes_per_sec": 0, 00:25:09.391 "r_mbytes_per_sec": 0, 00:25:09.391 "w_mbytes_per_sec": 0 00:25:09.391 }, 00:25:09.391 "claimed": false, 00:25:09.391 "zoned": false, 00:25:09.391 "supported_io_types": { 00:25:09.391 "read": true, 00:25:09.391 "write": true, 00:25:09.391 "unmap": true, 00:25:09.391 "flush": true, 00:25:09.391 "reset": true, 00:25:09.391 "nvme_admin": false, 00:25:09.391 "nvme_io": false, 00:25:09.391 "nvme_io_md": false, 00:25:09.391 "write_zeroes": true, 00:25:09.391 "zcopy": true, 00:25:09.391 "get_zone_info": false, 00:25:09.391 "zone_management": false, 00:25:09.391 "zone_append": false, 00:25:09.391 "compare": false, 00:25:09.391 "compare_and_write": false, 00:25:09.391 "abort": true, 00:25:09.391 "seek_hole": false, 00:25:09.391 "seek_data": false, 00:25:09.391 "copy": true, 00:25:09.391 "nvme_iov_md": false 00:25:09.391 }, 00:25:09.391 "memory_domains": [ 00:25:09.391 { 00:25:09.391 "dma_device_id": "system", 00:25:09.391 "dma_device_type": 1 00:25:09.391 }, 00:25:09.391 { 00:25:09.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.391 "dma_device_type": 2 00:25:09.391 } 00:25:09.391 ], 00:25:09.391 "driver_specific": {} 00:25:09.391 } 00:25:09.391 ] 00:25:09.391 13:36:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.391 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.650 BaseBdev3 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.650 
13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.650 [ 00:25:09.650 { 00:25:09.650 "name": "BaseBdev3", 00:25:09.650 "aliases": [ 00:25:09.650 "32432254-72bd-4e82-9fd9-46f4f59a3b50" 00:25:09.650 ], 00:25:09.650 "product_name": "Malloc disk", 00:25:09.650 "block_size": 512, 00:25:09.650 "num_blocks": 65536, 00:25:09.650 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:09.650 "assigned_rate_limits": { 00:25:09.650 "rw_ios_per_sec": 0, 00:25:09.650 "rw_mbytes_per_sec": 0, 00:25:09.650 "r_mbytes_per_sec": 0, 00:25:09.650 "w_mbytes_per_sec": 0 00:25:09.650 }, 00:25:09.650 "claimed": false, 00:25:09.650 "zoned": false, 00:25:09.650 "supported_io_types": { 00:25:09.650 "read": true, 00:25:09.650 "write": true, 00:25:09.650 "unmap": true, 00:25:09.650 "flush": true, 00:25:09.650 "reset": true, 00:25:09.650 "nvme_admin": false, 00:25:09.650 "nvme_io": false, 00:25:09.650 "nvme_io_md": false, 00:25:09.650 "write_zeroes": true, 00:25:09.650 "zcopy": true, 00:25:09.650 "get_zone_info": false, 00:25:09.650 "zone_management": false, 00:25:09.650 "zone_append": false, 00:25:09.650 "compare": false, 00:25:09.650 "compare_and_write": false, 00:25:09.650 "abort": true, 00:25:09.650 "seek_hole": false, 00:25:09.650 "seek_data": false, 00:25:09.650 "copy": true, 00:25:09.650 "nvme_iov_md": false 00:25:09.650 }, 00:25:09.650 "memory_domains": [ 00:25:09.650 { 00:25:09.650 "dma_device_id": "system", 00:25:09.650 "dma_device_type": 1 00:25:09.650 }, 00:25:09.650 { 00:25:09.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.650 "dma_device_type": 2 00:25:09.650 } 00:25:09.650 ], 00:25:09.650 "driver_specific": {} 00:25:09.650 } 00:25:09.650 ] 00:25:09.650 13:36:23 
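The `bdev_get_bdevs -b BaseBdev3 -t 2000` JSON above is what `waitforbdev` polls for. A minimal sanity-check sketch of the invariants that output implies; the field values are copied from the log output above rather than fetched over a live RPC connection:

```python
# Fields copied from the bdev_get_bdevs output for BaseBdev3 above,
# trimmed to the ones checked here (this is a static sketch, not an RPC call).
bdev = {
    "name": "BaseBdev3",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": False,
    "supported_io_types": {"read": True, "write": True, "unmap": True},
}

# bdev_malloc_create 32 512 -> a 32 MiB disk with 512-byte blocks,
# so block_size * num_blocks must equal 32 MiB.
size_bytes = bdev["block_size"] * bdev["num_blocks"]
assert size_bytes == 32 * 1024 * 1024

# A freshly created malloc bdev is unclaimed until the raid bdev claims it
# (compare "claimed": true / "claim_type": "exclusive_write" later in the log).
assert not bdev["claimed"]
assert bdev["supported_io_types"]["read"] and bdev["supported_io_types"]["write"]
```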
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.650 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.650 [2024-10-28 13:36:23.596220] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:09.650 [2024-10-28 13:36:23.596290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:09.650 [2024-10-28 13:36:23.596326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:09.651 [2024-10-28 13:36:23.599297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.651 "name": "Existed_Raid", 00:25:09.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.651 "strip_size_kb": 64, 00:25:09.651 "state": "configuring", 00:25:09.651 "raid_level": "raid0", 00:25:09.651 "superblock": false, 00:25:09.651 "num_base_bdevs": 3, 00:25:09.651 "num_base_bdevs_discovered": 2, 00:25:09.651 "num_base_bdevs_operational": 3, 00:25:09.651 "base_bdevs_list": [ 00:25:09.651 { 00:25:09.651 "name": "BaseBdev1", 00:25:09.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.651 "is_configured": false, 00:25:09.651 "data_offset": 0, 00:25:09.651 "data_size": 0 00:25:09.651 }, 00:25:09.651 { 00:25:09.651 "name": "BaseBdev2", 00:25:09.651 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:09.651 
"is_configured": true, 00:25:09.651 "data_offset": 0, 00:25:09.651 "data_size": 65536 00:25:09.651 }, 00:25:09.651 { 00:25:09.651 "name": "BaseBdev3", 00:25:09.651 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:09.651 "is_configured": true, 00:25:09.651 "data_offset": 0, 00:25:09.651 "data_size": 65536 00:25:09.651 } 00:25:09.651 ] 00:25:09.651 }' 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.651 13:36:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.217 [2024-10-28 13:36:24.116344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.217 13:36:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.217 "name": "Existed_Raid", 00:25:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.217 "strip_size_kb": 64, 00:25:10.217 "state": "configuring", 00:25:10.217 "raid_level": "raid0", 00:25:10.217 "superblock": false, 00:25:10.217 "num_base_bdevs": 3, 00:25:10.217 "num_base_bdevs_discovered": 1, 00:25:10.217 "num_base_bdevs_operational": 3, 00:25:10.217 "base_bdevs_list": [ 00:25:10.217 { 00:25:10.217 "name": "BaseBdev1", 00:25:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.217 "is_configured": false, 00:25:10.217 "data_offset": 0, 00:25:10.217 "data_size": 0 00:25:10.217 }, 00:25:10.217 { 00:25:10.217 "name": null, 00:25:10.217 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:10.217 "is_configured": false, 00:25:10.217 "data_offset": 0, 00:25:10.217 "data_size": 65536 00:25:10.217 }, 00:25:10.217 { 00:25:10.217 "name": "BaseBdev3", 00:25:10.217 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:10.217 "is_configured": true, 00:25:10.217 "data_offset": 0, 
00:25:10.217 "data_size": 65536 00:25:10.217 } 00:25:10.217 ] 00:25:10.217 }' 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.217 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.531 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.531 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.531 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.531 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.820 [2024-10-28 13:36:24.722605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:10.820 BaseBdev1 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 
-- # local i 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.820 [ 00:25:10.820 { 00:25:10.820 "name": "BaseBdev1", 00:25:10.820 "aliases": [ 00:25:10.820 "6958225e-84a2-48fd-928c-b528d21df480" 00:25:10.820 ], 00:25:10.820 "product_name": "Malloc disk", 00:25:10.820 "block_size": 512, 00:25:10.820 "num_blocks": 65536, 00:25:10.820 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:10.820 "assigned_rate_limits": { 00:25:10.820 "rw_ios_per_sec": 0, 00:25:10.820 "rw_mbytes_per_sec": 0, 00:25:10.820 "r_mbytes_per_sec": 0, 00:25:10.820 "w_mbytes_per_sec": 0 00:25:10.820 }, 00:25:10.820 "claimed": true, 00:25:10.820 "claim_type": "exclusive_write", 00:25:10.820 "zoned": false, 00:25:10.820 "supported_io_types": { 00:25:10.820 "read": true, 00:25:10.820 "write": true, 00:25:10.820 "unmap": true, 00:25:10.820 "flush": true, 00:25:10.820 "reset": true, 00:25:10.820 "nvme_admin": false, 00:25:10.820 "nvme_io": false, 00:25:10.820 "nvme_io_md": false, 00:25:10.820 "write_zeroes": true, 00:25:10.820 "zcopy": true, 
00:25:10.820 "get_zone_info": false, 00:25:10.820 "zone_management": false, 00:25:10.820 "zone_append": false, 00:25:10.820 "compare": false, 00:25:10.820 "compare_and_write": false, 00:25:10.820 "abort": true, 00:25:10.820 "seek_hole": false, 00:25:10.820 "seek_data": false, 00:25:10.820 "copy": true, 00:25:10.820 "nvme_iov_md": false 00:25:10.820 }, 00:25:10.820 "memory_domains": [ 00:25:10.820 { 00:25:10.820 "dma_device_id": "system", 00:25:10.820 "dma_device_type": 1 00:25:10.820 }, 00:25:10.820 { 00:25:10.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.820 "dma_device_type": 2 00:25:10.820 } 00:25:10.820 ], 00:25:10.820 "driver_specific": {} 00:25:10.820 } 00:25:10.820 ] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.820 13:36:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.820 "name": "Existed_Raid", 00:25:10.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.820 "strip_size_kb": 64, 00:25:10.820 "state": "configuring", 00:25:10.820 "raid_level": "raid0", 00:25:10.820 "superblock": false, 00:25:10.820 "num_base_bdevs": 3, 00:25:10.820 "num_base_bdevs_discovered": 2, 00:25:10.820 "num_base_bdevs_operational": 3, 00:25:10.820 "base_bdevs_list": [ 00:25:10.820 { 00:25:10.820 "name": "BaseBdev1", 00:25:10.820 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:10.820 "is_configured": true, 00:25:10.820 "data_offset": 0, 00:25:10.820 "data_size": 65536 00:25:10.820 }, 00:25:10.820 { 00:25:10.820 "name": null, 00:25:10.820 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:10.820 "is_configured": false, 00:25:10.820 "data_offset": 0, 00:25:10.820 "data_size": 65536 00:25:10.820 }, 00:25:10.820 { 00:25:10.820 "name": "BaseBdev3", 00:25:10.820 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:10.820 "is_configured": true, 00:25:10.820 "data_offset": 0, 00:25:10.820 "data_size": 65536 00:25:10.820 } 00:25:10.820 ] 00:25:10.820 }' 00:25:10.820 13:36:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.820 13:36:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.387 [2024-10-28 13:36:25.350976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.387 "name": "Existed_Raid", 00:25:11.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.387 "strip_size_kb": 64, 00:25:11.387 "state": "configuring", 00:25:11.387 "raid_level": "raid0", 00:25:11.387 "superblock": false, 00:25:11.387 "num_base_bdevs": 3, 00:25:11.387 "num_base_bdevs_discovered": 1, 00:25:11.387 "num_base_bdevs_operational": 3, 00:25:11.387 "base_bdevs_list": [ 00:25:11.387 { 00:25:11.387 "name": "BaseBdev1", 00:25:11.387 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:11.387 "is_configured": true, 00:25:11.387 "data_offset": 0, 00:25:11.387 "data_size": 65536 00:25:11.387 }, 00:25:11.387 { 00:25:11.387 "name": null, 00:25:11.387 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:11.387 "is_configured": false, 00:25:11.387 "data_offset": 0, 00:25:11.387 "data_size": 65536 
00:25:11.387 }, 00:25:11.387 { 00:25:11.387 "name": null, 00:25:11.387 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:11.387 "is_configured": false, 00:25:11.387 "data_offset": 0, 00:25:11.387 "data_size": 65536 00:25:11.387 } 00:25:11.387 ] 00:25:11.387 }' 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.387 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:11.953 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.954 [2024-10-28 13:36:25.943220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.954 13:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.954 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.954 "name": "Existed_Raid", 00:25:11.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.954 "strip_size_kb": 64, 00:25:11.954 "state": "configuring", 00:25:11.954 "raid_level": "raid0", 00:25:11.954 "superblock": false, 00:25:11.954 "num_base_bdevs": 3, 00:25:11.954 "num_base_bdevs_discovered": 2, 00:25:11.954 "num_base_bdevs_operational": 3, 00:25:11.954 "base_bdevs_list": [ 
00:25:11.954 { 00:25:11.954 "name": "BaseBdev1", 00:25:11.954 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:11.954 "is_configured": true, 00:25:11.954 "data_offset": 0, 00:25:11.954 "data_size": 65536 00:25:11.954 }, 00:25:11.954 { 00:25:11.954 "name": null, 00:25:11.954 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:11.954 "is_configured": false, 00:25:11.954 "data_offset": 0, 00:25:11.954 "data_size": 65536 00:25:11.954 }, 00:25:11.954 { 00:25:11.954 "name": "BaseBdev3", 00:25:11.954 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:11.954 "is_configured": true, 00:25:11.954 "data_offset": 0, 00:25:11.954 "data_size": 65536 00:25:11.954 } 00:25:11.954 ] 00:25:11.954 }' 00:25:11.954 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.954 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 [2024-10-28 13:36:26.527466] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:12.520 "name": "Existed_Raid", 00:25:12.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.520 "strip_size_kb": 64, 00:25:12.520 "state": "configuring", 00:25:12.520 "raid_level": "raid0", 00:25:12.520 "superblock": false, 00:25:12.520 "num_base_bdevs": 3, 00:25:12.520 "num_base_bdevs_discovered": 1, 00:25:12.520 "num_base_bdevs_operational": 3, 00:25:12.520 "base_bdevs_list": [ 00:25:12.520 { 00:25:12.520 "name": null, 00:25:12.520 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:12.520 "is_configured": false, 00:25:12.520 "data_offset": 0, 00:25:12.520 "data_size": 65536 00:25:12.520 }, 00:25:12.520 { 00:25:12.520 "name": null, 00:25:12.520 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:12.520 "is_configured": false, 00:25:12.520 "data_offset": 0, 00:25:12.520 "data_size": 65536 00:25:12.520 }, 00:25:12.520 { 00:25:12.520 "name": "BaseBdev3", 00:25:12.520 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:12.520 "is_configured": true, 00:25:12.520 "data_offset": 0, 00:25:12.520 "data_size": 65536 00:25:12.520 } 00:25:12.520 ] 00:25:12.520 }' 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:12.520 13:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 
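Each `verify_raid_bdev_state` call in the log runs `rpc_cmd bdev_raid_get_bdevs all` and filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`. The same selection and the `is_configured` bookkeeping can be sketched in Python; the JSON below is a trimmed copy of the state shown above after BaseBdev1 is deleted (field names and uuids are taken verbatim from the log, not invented):

```python
import json

# Trimmed copy of the `bdev_raid_get_bdevs all` output seen in the log
# after `bdev_malloc_delete BaseBdev1`: removed slots keep their uuid but
# lose their name and drop to is_configured == false.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "base_bdevs_list": [
      {"name": null, "uuid": "6958225e-84a2-48fd-928c-b528d21df480", "is_configured": false},
      {"name": null, "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", "is_configured": false},
      {"name": "BaseBdev3", "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The discovered count must match the number of configured base bdev slots,
# and with only 1 of 3 operational bdevs present the array stays "configuring".
configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert info["state"] == "configuring"
assert configured == info["num_base_bdevs_discovered"] == 1
```

The preserved uuid in the unconfigured slot is what lets the test later re-create the bdev with `bdev_malloc_create ... -u 6958225e-...` so the raid bdev can reclaim it.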
00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.086 [2024-10-28 13:36:27.126453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.086 "name": "Existed_Raid", 00:25:13.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.086 "strip_size_kb": 64, 00:25:13.086 "state": "configuring", 00:25:13.086 "raid_level": "raid0", 00:25:13.086 "superblock": false, 00:25:13.086 "num_base_bdevs": 3, 00:25:13.086 "num_base_bdevs_discovered": 2, 00:25:13.086 "num_base_bdevs_operational": 3, 00:25:13.086 "base_bdevs_list": [ 00:25:13.086 { 00:25:13.086 "name": null, 00:25:13.086 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:13.086 "is_configured": false, 00:25:13.086 "data_offset": 0, 00:25:13.086 "data_size": 65536 00:25:13.086 }, 00:25:13.086 { 00:25:13.086 "name": "BaseBdev2", 00:25:13.086 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:13.086 "is_configured": true, 00:25:13.086 "data_offset": 0, 00:25:13.086 "data_size": 65536 00:25:13.086 }, 00:25:13.086 { 00:25:13.086 "name": "BaseBdev3", 00:25:13.086 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:13.086 "is_configured": true, 00:25:13.086 "data_offset": 0, 00:25:13.086 "data_size": 65536 00:25:13.086 } 00:25:13.086 ] 00:25:13.086 }' 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.086 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.652 13:36:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6958225e-84a2-48fd-928c-b528d21df480 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 [2024-10-28 13:36:27.755438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:13.652 [2024-10-28 13:36:27.755540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:13.652 [2024-10-28 13:36:27.755558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:13.652 [2024-10-28 13:36:27.755916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:13.652 [2024-10-28 13:36:27.756099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:25:13.652 [2024-10-28 13:36:27.756128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:13.652 [2024-10-28 13:36:27.756441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.652 NewBaseBdev 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.652 [ 00:25:13.652 { 00:25:13.652 "name": "NewBaseBdev", 00:25:13.652 "aliases": [ 00:25:13.652 
"6958225e-84a2-48fd-928c-b528d21df480" 00:25:13.652 ], 00:25:13.652 "product_name": "Malloc disk", 00:25:13.652 "block_size": 512, 00:25:13.652 "num_blocks": 65536, 00:25:13.652 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:13.652 "assigned_rate_limits": { 00:25:13.652 "rw_ios_per_sec": 0, 00:25:13.652 "rw_mbytes_per_sec": 0, 00:25:13.652 "r_mbytes_per_sec": 0, 00:25:13.652 "w_mbytes_per_sec": 0 00:25:13.652 }, 00:25:13.652 "claimed": true, 00:25:13.652 "claim_type": "exclusive_write", 00:25:13.652 "zoned": false, 00:25:13.652 "supported_io_types": { 00:25:13.652 "read": true, 00:25:13.652 "write": true, 00:25:13.652 "unmap": true, 00:25:13.652 "flush": true, 00:25:13.652 "reset": true, 00:25:13.652 "nvme_admin": false, 00:25:13.652 "nvme_io": false, 00:25:13.652 "nvme_io_md": false, 00:25:13.652 "write_zeroes": true, 00:25:13.652 "zcopy": true, 00:25:13.652 "get_zone_info": false, 00:25:13.652 "zone_management": false, 00:25:13.652 "zone_append": false, 00:25:13.652 "compare": false, 00:25:13.652 "compare_and_write": false, 00:25:13.652 "abort": true, 00:25:13.652 "seek_hole": false, 00:25:13.652 "seek_data": false, 00:25:13.652 "copy": true, 00:25:13.652 "nvme_iov_md": false 00:25:13.652 }, 00:25:13.652 "memory_domains": [ 00:25:13.652 { 00:25:13.652 "dma_device_id": "system", 00:25:13.652 "dma_device_type": 1 00:25:13.652 }, 00:25:13.652 { 00:25:13.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.652 "dma_device_type": 2 00:25:13.652 } 00:25:13.652 ], 00:25:13.652 "driver_specific": {} 00:25:13.652 } 00:25:13.652 ] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:25:13.652 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.653 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.910 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.910 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:13.910 "name": "Existed_Raid", 00:25:13.910 "uuid": "40a282a7-e6ad-4dc2-bf19-f84b1124eb92", 00:25:13.910 "strip_size_kb": 64, 00:25:13.910 "state": "online", 00:25:13.910 "raid_level": "raid0", 00:25:13.910 "superblock": false, 00:25:13.910 "num_base_bdevs": 3, 00:25:13.910 "num_base_bdevs_discovered": 3, 00:25:13.910 "num_base_bdevs_operational": 3, 00:25:13.910 "base_bdevs_list": [ 
00:25:13.910 { 00:25:13.910 "name": "NewBaseBdev", 00:25:13.910 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:13.910 "is_configured": true, 00:25:13.910 "data_offset": 0, 00:25:13.910 "data_size": 65536 00:25:13.910 }, 00:25:13.910 { 00:25:13.910 "name": "BaseBdev2", 00:25:13.910 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:13.910 "is_configured": true, 00:25:13.910 "data_offset": 0, 00:25:13.910 "data_size": 65536 00:25:13.910 }, 00:25:13.910 { 00:25:13.910 "name": "BaseBdev3", 00:25:13.910 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:13.910 "is_configured": true, 00:25:13.910 "data_offset": 0, 00:25:13.910 "data_size": 65536 00:25:13.910 } 00:25:13.910 ] 00:25:13.910 }' 00:25:13.910 13:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:13.910 13:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.168 13:36:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.168 [2024-10-28 13:36:28.316176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.426 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.426 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:14.426 "name": "Existed_Raid", 00:25:14.426 "aliases": [ 00:25:14.426 "40a282a7-e6ad-4dc2-bf19-f84b1124eb92" 00:25:14.426 ], 00:25:14.426 "product_name": "Raid Volume", 00:25:14.426 "block_size": 512, 00:25:14.426 "num_blocks": 196608, 00:25:14.426 "uuid": "40a282a7-e6ad-4dc2-bf19-f84b1124eb92", 00:25:14.426 "assigned_rate_limits": { 00:25:14.426 "rw_ios_per_sec": 0, 00:25:14.426 "rw_mbytes_per_sec": 0, 00:25:14.426 "r_mbytes_per_sec": 0, 00:25:14.426 "w_mbytes_per_sec": 0 00:25:14.426 }, 00:25:14.426 "claimed": false, 00:25:14.426 "zoned": false, 00:25:14.426 "supported_io_types": { 00:25:14.426 "read": true, 00:25:14.426 "write": true, 00:25:14.426 "unmap": true, 00:25:14.426 "flush": true, 00:25:14.426 "reset": true, 00:25:14.426 "nvme_admin": false, 00:25:14.426 "nvme_io": false, 00:25:14.426 "nvme_io_md": false, 00:25:14.426 "write_zeroes": true, 00:25:14.426 "zcopy": false, 00:25:14.426 "get_zone_info": false, 00:25:14.426 "zone_management": false, 00:25:14.426 "zone_append": false, 00:25:14.426 "compare": false, 00:25:14.426 "compare_and_write": false, 00:25:14.426 "abort": false, 00:25:14.426 "seek_hole": false, 00:25:14.426 "seek_data": false, 00:25:14.426 "copy": false, 00:25:14.426 "nvme_iov_md": false 00:25:14.426 }, 00:25:14.426 "memory_domains": [ 00:25:14.426 { 00:25:14.426 "dma_device_id": "system", 00:25:14.426 "dma_device_type": 1 00:25:14.426 }, 00:25:14.426 { 00:25:14.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.426 "dma_device_type": 2 00:25:14.426 }, 00:25:14.426 { 00:25:14.426 "dma_device_id": "system", 00:25:14.426 "dma_device_type": 1 00:25:14.426 }, 
00:25:14.426 { 00:25:14.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.427 "dma_device_type": 2 00:25:14.427 }, 00:25:14.427 { 00:25:14.427 "dma_device_id": "system", 00:25:14.427 "dma_device_type": 1 00:25:14.427 }, 00:25:14.427 { 00:25:14.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.427 "dma_device_type": 2 00:25:14.427 } 00:25:14.427 ], 00:25:14.427 "driver_specific": { 00:25:14.427 "raid": { 00:25:14.427 "uuid": "40a282a7-e6ad-4dc2-bf19-f84b1124eb92", 00:25:14.427 "strip_size_kb": 64, 00:25:14.427 "state": "online", 00:25:14.427 "raid_level": "raid0", 00:25:14.427 "superblock": false, 00:25:14.427 "num_base_bdevs": 3, 00:25:14.427 "num_base_bdevs_discovered": 3, 00:25:14.427 "num_base_bdevs_operational": 3, 00:25:14.427 "base_bdevs_list": [ 00:25:14.427 { 00:25:14.427 "name": "NewBaseBdev", 00:25:14.427 "uuid": "6958225e-84a2-48fd-928c-b528d21df480", 00:25:14.427 "is_configured": true, 00:25:14.427 "data_offset": 0, 00:25:14.427 "data_size": 65536 00:25:14.427 }, 00:25:14.427 { 00:25:14.427 "name": "BaseBdev2", 00:25:14.427 "uuid": "46f651c0-d10c-41d9-bca7-ecf38e66545b", 00:25:14.427 "is_configured": true, 00:25:14.427 "data_offset": 0, 00:25:14.427 "data_size": 65536 00:25:14.427 }, 00:25:14.427 { 00:25:14.427 "name": "BaseBdev3", 00:25:14.427 "uuid": "32432254-72bd-4e82-9fd9-46f4f59a3b50", 00:25:14.427 "is_configured": true, 00:25:14.427 "data_offset": 0, 00:25:14.427 "data_size": 65536 00:25:14.427 } 00:25:14.427 ] 00:25:14.427 } 00:25:14.427 } 00:25:14.427 }' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:14.427 BaseBdev2 00:25:14.427 BaseBdev3' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.427 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:14.686 [2024-10-28 13:36:28.599801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:14.686 [2024-10-28 13:36:28.599889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.686 [2024-10-28 13:36:28.600023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.686 [2024-10-28 13:36:28.600120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.686 [2024-10-28 13:36:28.600168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:14.686 13:36:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76667 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76667 ']' 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76667 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76667 00:25:14.686 killing process with pid 76667 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76667' 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 76667 00:25:14.686 13:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76667 00:25:14.686 [2024-10-28 13:36:28.634091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:14.686 [2024-10-28 13:36:28.689641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:15.014 ************************************ 00:25:15.014 END TEST raid_state_function_test 00:25:15.014 ************************************ 00:25:15.014 13:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:15.014 00:25:15.014 real 0m10.795s 00:25:15.014 user 0m18.801s 00:25:15.014 sys 0m1.688s 00:25:15.014 13:36:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:25:15.014 13:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.014 13:36:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:25:15.015 13:36:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:15.015 13:36:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:15.015 13:36:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:15.015 ************************************ 00:25:15.015 START TEST raid_state_function_test_sb 00:25:15.015 ************************************ 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77294 
00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77294' 00:25:15.015 Process raid pid: 77294 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77294 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77294 ']' 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:15.015 13:36:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:15.015 [2024-10-28 13:36:29.163374] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:15.015 [2024-10-28 13:36:29.163551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.273 [2024-10-28 13:36:29.329229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:15.273 [2024-10-28 13:36:29.359353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.273 [2024-10-28 13:36:29.430133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.531 [2024-10-28 13:36:29.509350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.531 [2024-10-28 13:36:29.509412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 [2024-10-28 13:36:30.186127] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:16.097 [2024-10-28 13:36:30.186277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:16.097 [2024-10-28 13:36:30.186305] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:16.097 [2024-10-28 13:36:30.186322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:16.097 [2024-10-28 13:36:30.186346] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:16.097 [2024-10-28 13:36:30.186360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.097 13:36:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.097 "name": "Existed_Raid", 00:25:16.097 "uuid": "1d9ab628-17b1-45d7-8685-fed6dc57aff6", 00:25:16.097 "strip_size_kb": 64, 
00:25:16.097 "state": "configuring", 00:25:16.097 "raid_level": "raid0", 00:25:16.097 "superblock": true, 00:25:16.097 "num_base_bdevs": 3, 00:25:16.097 "num_base_bdevs_discovered": 0, 00:25:16.097 "num_base_bdevs_operational": 3, 00:25:16.097 "base_bdevs_list": [ 00:25:16.097 { 00:25:16.097 "name": "BaseBdev1", 00:25:16.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.097 "is_configured": false, 00:25:16.097 "data_offset": 0, 00:25:16.097 "data_size": 0 00:25:16.097 }, 00:25:16.097 { 00:25:16.097 "name": "BaseBdev2", 00:25:16.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.097 "is_configured": false, 00:25:16.097 "data_offset": 0, 00:25:16.097 "data_size": 0 00:25:16.097 }, 00:25:16.097 { 00:25:16.097 "name": "BaseBdev3", 00:25:16.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.097 "is_configured": false, 00:25:16.097 "data_offset": 0, 00:25:16.097 "data_size": 0 00:25:16.097 } 00:25:16.097 ] 00:25:16.097 }' 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.097 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.664 [2024-10-28 13:36:30.734042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:16.664 [2024-10-28 13:36:30.734121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.664 [2024-10-28 13:36:30.742067] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:16.664 [2024-10-28 13:36:30.742129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:16.664 [2024-10-28 13:36:30.742185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:16.664 [2024-10-28 13:36:30.742204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:16.664 [2024-10-28 13:36:30.742220] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:16.664 [2024-10-28 13:36:30.742235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.664 [2024-10-28 13:36:30.766250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:16.664 BaseBdev1 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:16.664 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.665 [ 00:25:16.665 { 00:25:16.665 "name": "BaseBdev1", 00:25:16.665 "aliases": [ 00:25:16.665 "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5" 00:25:16.665 ], 00:25:16.665 "product_name": "Malloc disk", 00:25:16.665 "block_size": 512, 00:25:16.665 "num_blocks": 65536, 00:25:16.665 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:16.665 "assigned_rate_limits": { 00:25:16.665 "rw_ios_per_sec": 0, 00:25:16.665 "rw_mbytes_per_sec": 0, 00:25:16.665 "r_mbytes_per_sec": 0, 00:25:16.665 "w_mbytes_per_sec": 0 00:25:16.665 }, 00:25:16.665 "claimed": true, 00:25:16.665 "claim_type": "exclusive_write", 00:25:16.665 "zoned": false, 00:25:16.665 "supported_io_types": { 
00:25:16.665 "read": true, 00:25:16.665 "write": true, 00:25:16.665 "unmap": true, 00:25:16.665 "flush": true, 00:25:16.665 "reset": true, 00:25:16.665 "nvme_admin": false, 00:25:16.665 "nvme_io": false, 00:25:16.665 "nvme_io_md": false, 00:25:16.665 "write_zeroes": true, 00:25:16.665 "zcopy": true, 00:25:16.665 "get_zone_info": false, 00:25:16.665 "zone_management": false, 00:25:16.665 "zone_append": false, 00:25:16.665 "compare": false, 00:25:16.665 "compare_and_write": false, 00:25:16.665 "abort": true, 00:25:16.665 "seek_hole": false, 00:25:16.665 "seek_data": false, 00:25:16.665 "copy": true, 00:25:16.665 "nvme_iov_md": false 00:25:16.665 }, 00:25:16.665 "memory_domains": [ 00:25:16.665 { 00:25:16.665 "dma_device_id": "system", 00:25:16.665 "dma_device_type": 1 00:25:16.665 }, 00:25:16.665 { 00:25:16.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:16.665 "dma_device_type": 2 00:25:16.665 } 00:25:16.665 ], 00:25:16.665 "driver_specific": {} 00:25:16.665 } 00:25:16.665 ] 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:16.665 13:36:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.665 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.922 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.922 "name": "Existed_Raid", 00:25:16.922 "uuid": "9060e94f-6632-4d78-a313-524e3d21f0f8", 00:25:16.922 "strip_size_kb": 64, 00:25:16.922 "state": "configuring", 00:25:16.922 "raid_level": "raid0", 00:25:16.922 "superblock": true, 00:25:16.922 "num_base_bdevs": 3, 00:25:16.922 "num_base_bdevs_discovered": 1, 00:25:16.922 "num_base_bdevs_operational": 3, 00:25:16.922 "base_bdevs_list": [ 00:25:16.922 { 00:25:16.922 "name": "BaseBdev1", 00:25:16.922 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:16.922 "is_configured": true, 00:25:16.922 "data_offset": 2048, 00:25:16.922 "data_size": 63488 00:25:16.922 }, 00:25:16.922 { 00:25:16.922 "name": "BaseBdev2", 00:25:16.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.922 "is_configured": false, 00:25:16.922 "data_offset": 0, 00:25:16.922 "data_size": 0 00:25:16.922 }, 00:25:16.922 { 00:25:16.922 "name": 
"BaseBdev3", 00:25:16.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.922 "is_configured": false, 00:25:16.922 "data_offset": 0, 00:25:16.922 "data_size": 0 00:25:16.922 } 00:25:16.922 ] 00:25:16.922 }' 00:25:16.922 13:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.922 13:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.179 [2024-10-28 13:36:31.290483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:17.179 [2024-10-28 13:36:31.290589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.179 [2024-10-28 13:36:31.298503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.179 [2024-10-28 13:36:31.301400] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:17.179 [2024-10-28 13:36:31.301784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:17.179 [2024-10-28 13:36:31.301825] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:17.179 [2024-10-28 13:36:31.301846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.179 13:36:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.179 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.437 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.437 "name": "Existed_Raid", 00:25:17.437 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:17.437 "strip_size_kb": 64, 00:25:17.437 "state": "configuring", 00:25:17.437 "raid_level": "raid0", 00:25:17.437 "superblock": true, 00:25:17.437 "num_base_bdevs": 3, 00:25:17.437 "num_base_bdevs_discovered": 1, 00:25:17.437 "num_base_bdevs_operational": 3, 00:25:17.437 "base_bdevs_list": [ 00:25:17.437 { 00:25:17.437 "name": "BaseBdev1", 00:25:17.437 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:17.437 "is_configured": true, 00:25:17.437 "data_offset": 2048, 00:25:17.437 "data_size": 63488 00:25:17.437 }, 00:25:17.437 { 00:25:17.437 "name": "BaseBdev2", 00:25:17.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.437 "is_configured": false, 00:25:17.437 "data_offset": 0, 00:25:17.437 "data_size": 0 00:25:17.437 }, 00:25:17.437 { 00:25:17.437 "name": "BaseBdev3", 00:25:17.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.437 "is_configured": false, 00:25:17.437 "data_offset": 0, 00:25:17.437 "data_size": 0 00:25:17.437 } 00:25:17.437 ] 00:25:17.437 }' 00:25:17.437 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.437 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.695 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:17.695 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:17.695 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.951 [2024-10-28 13:36:31.864717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:17.951 BaseBdev2 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.951 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.951 [ 00:25:17.951 { 00:25:17.951 "name": "BaseBdev2", 00:25:17.951 "aliases": [ 00:25:17.951 
"db8f1b79-6ce6-43e8-b03b-a77a2cf23755" 00:25:17.951 ], 00:25:17.951 "product_name": "Malloc disk", 00:25:17.951 "block_size": 512, 00:25:17.951 "num_blocks": 65536, 00:25:17.951 "uuid": "db8f1b79-6ce6-43e8-b03b-a77a2cf23755", 00:25:17.951 "assigned_rate_limits": { 00:25:17.951 "rw_ios_per_sec": 0, 00:25:17.951 "rw_mbytes_per_sec": 0, 00:25:17.951 "r_mbytes_per_sec": 0, 00:25:17.951 "w_mbytes_per_sec": 0 00:25:17.951 }, 00:25:17.951 "claimed": true, 00:25:17.951 "claim_type": "exclusive_write", 00:25:17.951 "zoned": false, 00:25:17.951 "supported_io_types": { 00:25:17.951 "read": true, 00:25:17.951 "write": true, 00:25:17.951 "unmap": true, 00:25:17.951 "flush": true, 00:25:17.951 "reset": true, 00:25:17.951 "nvme_admin": false, 00:25:17.951 "nvme_io": false, 00:25:17.951 "nvme_io_md": false, 00:25:17.951 "write_zeroes": true, 00:25:17.951 "zcopy": true, 00:25:17.951 "get_zone_info": false, 00:25:17.951 "zone_management": false, 00:25:17.951 "zone_append": false, 00:25:17.951 "compare": false, 00:25:17.951 "compare_and_write": false, 00:25:17.951 "abort": true, 00:25:17.951 "seek_hole": false, 00:25:17.952 "seek_data": false, 00:25:17.952 "copy": true, 00:25:17.952 "nvme_iov_md": false 00:25:17.952 }, 00:25:17.952 "memory_domains": [ 00:25:17.952 { 00:25:17.952 "dma_device_id": "system", 00:25:17.952 "dma_device_type": 1 00:25:17.952 }, 00:25:17.952 { 00:25:17.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.952 "dma_device_type": 2 00:25:17.952 } 00:25:17.952 ], 00:25:17.952 "driver_specific": {} 00:25:17.952 } 00:25:17.952 ] 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.952 "name": "Existed_Raid", 00:25:17.952 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:17.952 
"strip_size_kb": 64, 00:25:17.952 "state": "configuring", 00:25:17.952 "raid_level": "raid0", 00:25:17.952 "superblock": true, 00:25:17.952 "num_base_bdevs": 3, 00:25:17.952 "num_base_bdevs_discovered": 2, 00:25:17.952 "num_base_bdevs_operational": 3, 00:25:17.952 "base_bdevs_list": [ 00:25:17.952 { 00:25:17.952 "name": "BaseBdev1", 00:25:17.952 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:17.952 "is_configured": true, 00:25:17.952 "data_offset": 2048, 00:25:17.952 "data_size": 63488 00:25:17.952 }, 00:25:17.952 { 00:25:17.952 "name": "BaseBdev2", 00:25:17.952 "uuid": "db8f1b79-6ce6-43e8-b03b-a77a2cf23755", 00:25:17.952 "is_configured": true, 00:25:17.952 "data_offset": 2048, 00:25:17.952 "data_size": 63488 00:25:17.952 }, 00:25:17.952 { 00:25:17.952 "name": "BaseBdev3", 00:25:17.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.952 "is_configured": false, 00:25:17.952 "data_offset": 0, 00:25:17.952 "data_size": 0 00:25:17.952 } 00:25:17.952 ] 00:25:17.952 }' 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.952 13:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.516 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.517 [2024-10-28 13:36:32.452756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:18.517 [2024-10-28 13:36:32.453079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:18.517 [2024-10-28 13:36:32.453102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:18.517 BaseBdev3 00:25:18.517 [2024-10-28 13:36:32.453605] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:18.517 [2024-10-28 13:36:32.453810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:18.517 [2024-10-28 13:36:32.453852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:25:18.517 [2024-10-28 13:36:32.454039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.517 [ 00:25:18.517 { 00:25:18.517 "name": "BaseBdev3", 00:25:18.517 "aliases": [ 00:25:18.517 "552990be-78e3-445e-844a-4472d8576715" 00:25:18.517 ], 00:25:18.517 "product_name": "Malloc disk", 00:25:18.517 "block_size": 512, 00:25:18.517 "num_blocks": 65536, 00:25:18.517 "uuid": "552990be-78e3-445e-844a-4472d8576715", 00:25:18.517 "assigned_rate_limits": { 00:25:18.517 "rw_ios_per_sec": 0, 00:25:18.517 "rw_mbytes_per_sec": 0, 00:25:18.517 "r_mbytes_per_sec": 0, 00:25:18.517 "w_mbytes_per_sec": 0 00:25:18.517 }, 00:25:18.517 "claimed": true, 00:25:18.517 "claim_type": "exclusive_write", 00:25:18.517 "zoned": false, 00:25:18.517 "supported_io_types": { 00:25:18.517 "read": true, 00:25:18.517 "write": true, 00:25:18.517 "unmap": true, 00:25:18.517 "flush": true, 00:25:18.517 "reset": true, 00:25:18.517 "nvme_admin": false, 00:25:18.517 "nvme_io": false, 00:25:18.517 "nvme_io_md": false, 00:25:18.517 "write_zeroes": true, 00:25:18.517 "zcopy": true, 00:25:18.517 "get_zone_info": false, 00:25:18.517 "zone_management": false, 00:25:18.517 "zone_append": false, 00:25:18.517 "compare": false, 00:25:18.517 "compare_and_write": false, 00:25:18.517 "abort": true, 00:25:18.517 "seek_hole": false, 00:25:18.517 "seek_data": false, 00:25:18.517 "copy": true, 00:25:18.517 "nvme_iov_md": false 00:25:18.517 }, 00:25:18.517 "memory_domains": [ 00:25:18.517 { 00:25:18.517 "dma_device_id": "system", 00:25:18.517 "dma_device_type": 1 00:25:18.517 }, 00:25:18.517 { 00:25:18.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.517 "dma_device_type": 2 00:25:18.517 } 00:25:18.517 ], 00:25:18.517 "driver_specific": {} 00:25:18.517 } 00:25:18.517 ] 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:18.517 
13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.517 13:36:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:18.517 "name": "Existed_Raid", 00:25:18.517 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:18.517 "strip_size_kb": 64, 00:25:18.517 "state": "online", 00:25:18.517 "raid_level": "raid0", 00:25:18.517 "superblock": true, 00:25:18.517 "num_base_bdevs": 3, 00:25:18.517 "num_base_bdevs_discovered": 3, 00:25:18.517 "num_base_bdevs_operational": 3, 00:25:18.517 "base_bdevs_list": [ 00:25:18.517 { 00:25:18.517 "name": "BaseBdev1", 00:25:18.517 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:18.517 "is_configured": true, 00:25:18.517 "data_offset": 2048, 00:25:18.517 "data_size": 63488 00:25:18.517 }, 00:25:18.517 { 00:25:18.517 "name": "BaseBdev2", 00:25:18.517 "uuid": "db8f1b79-6ce6-43e8-b03b-a77a2cf23755", 00:25:18.517 "is_configured": true, 00:25:18.517 "data_offset": 2048, 00:25:18.517 "data_size": 63488 00:25:18.517 }, 00:25:18.517 { 00:25:18.517 "name": "BaseBdev3", 00:25:18.517 "uuid": "552990be-78e3-445e-844a-4472d8576715", 00:25:18.517 "is_configured": true, 00:25:18.517 "data_offset": 2048, 00:25:18.517 "data_size": 63488 00:25:18.517 } 00:25:18.517 ] 00:25:18.517 }' 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:18.517 13:36:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:19.082 
13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.082 [2024-10-28 13:36:33.025483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.082 "name": "Existed_Raid", 00:25:19.082 "aliases": [ 00:25:19.082 "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b" 00:25:19.082 ], 00:25:19.082 "product_name": "Raid Volume", 00:25:19.082 "block_size": 512, 00:25:19.082 "num_blocks": 190464, 00:25:19.082 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:19.082 "assigned_rate_limits": { 00:25:19.082 "rw_ios_per_sec": 0, 00:25:19.082 "rw_mbytes_per_sec": 0, 00:25:19.082 "r_mbytes_per_sec": 0, 00:25:19.082 "w_mbytes_per_sec": 0 00:25:19.082 }, 00:25:19.082 "claimed": false, 00:25:19.082 "zoned": false, 00:25:19.082 "supported_io_types": { 00:25:19.082 "read": true, 00:25:19.082 "write": true, 00:25:19.082 "unmap": true, 00:25:19.082 "flush": true, 00:25:19.082 "reset": true, 00:25:19.082 "nvme_admin": false, 00:25:19.082 "nvme_io": false, 00:25:19.082 "nvme_io_md": false, 00:25:19.082 "write_zeroes": true, 00:25:19.082 "zcopy": false, 00:25:19.082 "get_zone_info": false, 00:25:19.082 "zone_management": false, 00:25:19.082 "zone_append": false, 00:25:19.082 "compare": false, 00:25:19.082 "compare_and_write": false, 00:25:19.082 "abort": 
false, 00:25:19.082 "seek_hole": false, 00:25:19.082 "seek_data": false, 00:25:19.082 "copy": false, 00:25:19.082 "nvme_iov_md": false 00:25:19.082 }, 00:25:19.082 "memory_domains": [ 00:25:19.082 { 00:25:19.082 "dma_device_id": "system", 00:25:19.082 "dma_device_type": 1 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.082 "dma_device_type": 2 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "dma_device_id": "system", 00:25:19.082 "dma_device_type": 1 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.082 "dma_device_type": 2 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "dma_device_id": "system", 00:25:19.082 "dma_device_type": 1 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.082 "dma_device_type": 2 00:25:19.082 } 00:25:19.082 ], 00:25:19.082 "driver_specific": { 00:25:19.082 "raid": { 00:25:19.082 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:19.082 "strip_size_kb": 64, 00:25:19.082 "state": "online", 00:25:19.082 "raid_level": "raid0", 00:25:19.082 "superblock": true, 00:25:19.082 "num_base_bdevs": 3, 00:25:19.082 "num_base_bdevs_discovered": 3, 00:25:19.082 "num_base_bdevs_operational": 3, 00:25:19.082 "base_bdevs_list": [ 00:25:19.082 { 00:25:19.082 "name": "BaseBdev1", 00:25:19.082 "uuid": "52d9f9e6-5cd2-4f97-88c4-d8bc34c3acc5", 00:25:19.082 "is_configured": true, 00:25:19.082 "data_offset": 2048, 00:25:19.082 "data_size": 63488 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "name": "BaseBdev2", 00:25:19.082 "uuid": "db8f1b79-6ce6-43e8-b03b-a77a2cf23755", 00:25:19.082 "is_configured": true, 00:25:19.082 "data_offset": 2048, 00:25:19.082 "data_size": 63488 00:25:19.082 }, 00:25:19.082 { 00:25:19.082 "name": "BaseBdev3", 00:25:19.082 "uuid": "552990be-78e3-445e-844a-4472d8576715", 00:25:19.082 "is_configured": true, 00:25:19.082 "data_offset": 2048, 00:25:19.082 "data_size": 63488 00:25:19.082 } 00:25:19.082 ] 00:25:19.082 } 
00:25:19.082 } 00:25:19.082 }' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:19.082 BaseBdev2 00:25:19.082 BaseBdev3' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.082 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 [2024-10-28 13:36:33.365214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
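The geometry check traced above reduces to a plain string comparison: jq joins each bdev's `block_size`, `md_size`, `md_interleave` and `dif_type` with spaces (null fields become empty strings, hence the `'512   '` values with trailing spaces), and the test requires every base bdev to produce the same tuple as the raid bdev. A minimal stand-alone sketch of that comparison — the `join_geometry` helper and the hard-coded sample values are illustrative, not part of the SPDK test scripts:

```shell
# Emulate jq's [.block_size, .md_size, .md_interleave, .dif_type] | join(" "):
# empty arguments stand in for null JSON fields, so a 512-byte bdev with no
# metadata yields "512   " (three trailing spaces), matching the trace above.
join_geometry() {
  printf '%s %s %s %s' "$1" "$2" "$3" "$4"
}

cmp_raid_bdev=$(join_geometry 512 '' '' '')
for name in BaseBdev1 BaseBdev2 BaseBdev3; do
  # In the real test this tuple comes from: rpc_cmd bdev_get_bdevs -b "$name" | jq ...
  cmp_base_bdev=$(join_geometry 512 '' '' '')
  [ "$cmp_base_bdev" = "$cmp_raid_bdev" ] || { echo "$name geometry mismatch"; exit 1; }
done
echo "all base bdev geometries match"
```

Note that `$(...)` strips trailing newlines but not trailing spaces, which is why the padded `'512   '` tuples survive into the comparison in the trace.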
00:25:19.383 [2024-10-28 13:36:33.365280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:19.383 [2024-10-28 13:36:33.365372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:19.383 13:36:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.383 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:19.383 "name": "Existed_Raid", 00:25:19.383 "uuid": "124a8d1d-f6bf-4f57-b8fc-2514fd20ed5b", 00:25:19.383 "strip_size_kb": 64, 00:25:19.383 "state": "offline", 00:25:19.383 "raid_level": "raid0", 00:25:19.383 "superblock": true, 00:25:19.383 "num_base_bdevs": 3, 00:25:19.383 "num_base_bdevs_discovered": 2, 00:25:19.383 "num_base_bdevs_operational": 2, 00:25:19.383 "base_bdevs_list": [ 00:25:19.383 { 00:25:19.384 "name": null, 00:25:19.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.384 "is_configured": false, 00:25:19.384 "data_offset": 0, 00:25:19.384 "data_size": 63488 00:25:19.384 }, 00:25:19.384 { 00:25:19.384 "name": "BaseBdev2", 00:25:19.384 "uuid": "db8f1b79-6ce6-43e8-b03b-a77a2cf23755", 00:25:19.384 "is_configured": true, 00:25:19.384 "data_offset": 2048, 00:25:19.384 "data_size": 63488 00:25:19.384 }, 00:25:19.384 { 00:25:19.384 "name": "BaseBdev3", 00:25:19.384 "uuid": "552990be-78e3-445e-844a-4472d8576715", 00:25:19.384 "is_configured": true, 00:25:19.384 "data_offset": 2048, 00:25:19.384 "data_size": 63488 00:25:19.384 } 00:25:19.384 ] 00:25:19.384 }' 00:25:19.384 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:19.384 13:36:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.953 13:36:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 [2024-10-28 13:36:33.987637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 [2024-10-28 13:36:34.066211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:19.953 [2024-10-28 13:36:34.066307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.953 13:36:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:19.953 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.218 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.218 BaseBdev2 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 [ 00:25:20.219 { 00:25:20.219 "name": "BaseBdev2", 00:25:20.219 "aliases": [ 00:25:20.219 "e1b8e2e9-a81d-4187-8281-f8752dbd263e" 00:25:20.219 ], 00:25:20.219 "product_name": "Malloc disk", 00:25:20.219 "block_size": 512, 00:25:20.219 "num_blocks": 65536, 00:25:20.219 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:20.219 "assigned_rate_limits": { 00:25:20.219 "rw_ios_per_sec": 0, 00:25:20.219 "rw_mbytes_per_sec": 0, 00:25:20.219 "r_mbytes_per_sec": 0, 00:25:20.219 "w_mbytes_per_sec": 0 00:25:20.219 }, 00:25:20.219 "claimed": false, 00:25:20.219 "zoned": false, 00:25:20.219 "supported_io_types": { 00:25:20.219 "read": true, 00:25:20.219 "write": true, 00:25:20.219 "unmap": true, 00:25:20.219 "flush": true, 00:25:20.219 "reset": true, 00:25:20.219 "nvme_admin": false, 00:25:20.219 "nvme_io": false, 00:25:20.219 "nvme_io_md": false, 00:25:20.219 "write_zeroes": true, 00:25:20.219 "zcopy": true, 00:25:20.219 "get_zone_info": false, 00:25:20.219 "zone_management": false, 00:25:20.219 "zone_append": false, 00:25:20.219 "compare": false, 00:25:20.219 "compare_and_write": false, 00:25:20.219 "abort": true, 00:25:20.219 "seek_hole": false, 00:25:20.219 "seek_data": false, 00:25:20.219 "copy": true, 00:25:20.219 
"nvme_iov_md": false 00:25:20.219 }, 00:25:20.219 "memory_domains": [ 00:25:20.219 { 00:25:20.219 "dma_device_id": "system", 00:25:20.219 "dma_device_type": 1 00:25:20.219 }, 00:25:20.219 { 00:25:20.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.219 "dma_device_type": 2 00:25:20.219 } 00:25:20.219 ], 00:25:20.219 "driver_specific": {} 00:25:20.219 } 00:25:20.219 ] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 BaseBdev3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:20.219 
13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 [ 00:25:20.219 { 00:25:20.219 "name": "BaseBdev3", 00:25:20.219 "aliases": [ 00:25:20.219 "26404c00-c08f-4a6c-a52a-3d1d49806f23" 00:25:20.219 ], 00:25:20.219 "product_name": "Malloc disk", 00:25:20.219 "block_size": 512, 00:25:20.219 "num_blocks": 65536, 00:25:20.219 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:20.219 "assigned_rate_limits": { 00:25:20.219 "rw_ios_per_sec": 0, 00:25:20.219 "rw_mbytes_per_sec": 0, 00:25:20.219 "r_mbytes_per_sec": 0, 00:25:20.219 "w_mbytes_per_sec": 0 00:25:20.219 }, 00:25:20.219 "claimed": false, 00:25:20.219 "zoned": false, 00:25:20.219 "supported_io_types": { 00:25:20.219 "read": true, 00:25:20.219 "write": true, 00:25:20.219 "unmap": true, 00:25:20.219 "flush": true, 00:25:20.219 "reset": true, 00:25:20.219 "nvme_admin": false, 00:25:20.219 "nvme_io": false, 00:25:20.219 "nvme_io_md": false, 00:25:20.219 "write_zeroes": true, 00:25:20.219 "zcopy": true, 00:25:20.219 "get_zone_info": false, 00:25:20.219 "zone_management": false, 00:25:20.219 "zone_append": false, 00:25:20.219 "compare": false, 00:25:20.219 "compare_and_write": false, 00:25:20.219 "abort": true, 00:25:20.219 "seek_hole": false, 00:25:20.219 "seek_data": 
false, 00:25:20.219 "copy": true, 00:25:20.219 "nvme_iov_md": false 00:25:20.219 }, 00:25:20.219 "memory_domains": [ 00:25:20.219 { 00:25:20.219 "dma_device_id": "system", 00:25:20.219 "dma_device_type": 1 00:25:20.219 }, 00:25:20.219 { 00:25:20.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.219 "dma_device_type": 2 00:25:20.219 } 00:25:20.219 ], 00:25:20.219 "driver_specific": {} 00:25:20.219 } 00:25:20.219 ] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 [2024-10-28 13:36:34.252818] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:20.219 [2024-10-28 13:36:34.252922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:20.219 [2024-10-28 13:36:34.252957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:20.219 [2024-10-28 13:36:34.255918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.219 "name": "Existed_Raid", 00:25:20.219 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:20.219 "strip_size_kb": 64, 00:25:20.219 "state": "configuring", 00:25:20.219 "raid_level": "raid0", 00:25:20.219 
"superblock": true, 00:25:20.219 "num_base_bdevs": 3, 00:25:20.219 "num_base_bdevs_discovered": 2, 00:25:20.219 "num_base_bdevs_operational": 3, 00:25:20.219 "base_bdevs_list": [ 00:25:20.219 { 00:25:20.219 "name": "BaseBdev1", 00:25:20.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.219 "is_configured": false, 00:25:20.219 "data_offset": 0, 00:25:20.219 "data_size": 0 00:25:20.219 }, 00:25:20.219 { 00:25:20.219 "name": "BaseBdev2", 00:25:20.219 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:20.219 "is_configured": true, 00:25:20.219 "data_offset": 2048, 00:25:20.219 "data_size": 63488 00:25:20.219 }, 00:25:20.219 { 00:25:20.219 "name": "BaseBdev3", 00:25:20.219 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:20.219 "is_configured": true, 00:25:20.219 "data_offset": 2048, 00:25:20.219 "data_size": 63488 00:25:20.219 } 00:25:20.219 ] 00:25:20.219 }' 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.219 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.786 [2024-10-28 13:36:34.761182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:20.786 "name": "Existed_Raid", 00:25:20.786 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:20.786 "strip_size_kb": 64, 00:25:20.786 "state": "configuring", 00:25:20.786 "raid_level": "raid0", 00:25:20.786 "superblock": true, 00:25:20.786 "num_base_bdevs": 3, 00:25:20.786 "num_base_bdevs_discovered": 1, 00:25:20.786 "num_base_bdevs_operational": 3, 00:25:20.786 "base_bdevs_list": [ 00:25:20.786 { 00:25:20.786 "name": "BaseBdev1", 
00:25:20.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.786 "is_configured": false, 00:25:20.786 "data_offset": 0, 00:25:20.786 "data_size": 0 00:25:20.786 }, 00:25:20.786 { 00:25:20.786 "name": null, 00:25:20.786 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:20.786 "is_configured": false, 00:25:20.786 "data_offset": 0, 00:25:20.786 "data_size": 63488 00:25:20.786 }, 00:25:20.786 { 00:25:20.786 "name": "BaseBdev3", 00:25:20.786 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:20.786 "is_configured": true, 00:25:20.786 "data_offset": 2048, 00:25:20.786 "data_size": 63488 00:25:20.786 } 00:25:20.786 ] 00:25:20.786 }' 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:20.786 13:36:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 [2024-10-28 13:36:35.367198] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.352 BaseBdev1 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.352 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.352 [ 00:25:21.352 { 00:25:21.352 "name": "BaseBdev1", 00:25:21.352 "aliases": [ 00:25:21.352 "40baa209-da5f-41e7-ad24-762ba1b0204e" 00:25:21.352 ], 00:25:21.352 "product_name": "Malloc disk", 00:25:21.352 "block_size": 512, 00:25:21.352 "num_blocks": 65536, 00:25:21.352 
"uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:21.352 "assigned_rate_limits": { 00:25:21.352 "rw_ios_per_sec": 0, 00:25:21.352 "rw_mbytes_per_sec": 0, 00:25:21.352 "r_mbytes_per_sec": 0, 00:25:21.352 "w_mbytes_per_sec": 0 00:25:21.352 }, 00:25:21.352 "claimed": true, 00:25:21.352 "claim_type": "exclusive_write", 00:25:21.352 "zoned": false, 00:25:21.352 "supported_io_types": { 00:25:21.352 "read": true, 00:25:21.352 "write": true, 00:25:21.352 "unmap": true, 00:25:21.352 "flush": true, 00:25:21.352 "reset": true, 00:25:21.352 "nvme_admin": false, 00:25:21.352 "nvme_io": false, 00:25:21.352 "nvme_io_md": false, 00:25:21.352 "write_zeroes": true, 00:25:21.352 "zcopy": true, 00:25:21.352 "get_zone_info": false, 00:25:21.352 "zone_management": false, 00:25:21.352 "zone_append": false, 00:25:21.352 "compare": false, 00:25:21.352 "compare_and_write": false, 00:25:21.352 "abort": true, 00:25:21.352 "seek_hole": false, 00:25:21.352 "seek_data": false, 00:25:21.352 "copy": true, 00:25:21.352 "nvme_iov_md": false 00:25:21.352 }, 00:25:21.352 "memory_domains": [ 00:25:21.352 { 00:25:21.352 "dma_device_id": "system", 00:25:21.352 "dma_device_type": 1 00:25:21.352 }, 00:25:21.352 { 00:25:21.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.352 "dma_device_type": 2 00:25:21.352 } 00:25:21.352 ], 00:25:21.352 "driver_specific": {} 00:25:21.352 } 00:25:21.353 ] 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.353 "name": "Existed_Raid", 00:25:21.353 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:21.353 "strip_size_kb": 64, 00:25:21.353 "state": "configuring", 00:25:21.353 "raid_level": "raid0", 00:25:21.353 "superblock": true, 00:25:21.353 "num_base_bdevs": 3, 00:25:21.353 "num_base_bdevs_discovered": 2, 00:25:21.353 "num_base_bdevs_operational": 3, 00:25:21.353 "base_bdevs_list": [ 00:25:21.353 { 00:25:21.353 "name": "BaseBdev1", 00:25:21.353 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 
00:25:21.353 "is_configured": true, 00:25:21.353 "data_offset": 2048, 00:25:21.353 "data_size": 63488 00:25:21.353 }, 00:25:21.353 { 00:25:21.353 "name": null, 00:25:21.353 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:21.353 "is_configured": false, 00:25:21.353 "data_offset": 0, 00:25:21.353 "data_size": 63488 00:25:21.353 }, 00:25:21.353 { 00:25:21.353 "name": "BaseBdev3", 00:25:21.353 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:21.353 "is_configured": true, 00:25:21.353 "data_offset": 2048, 00:25:21.353 "data_size": 63488 00:25:21.353 } 00:25:21.353 ] 00:25:21.353 }' 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.353 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.919 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.919 13:36:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:21.919 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.919 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.919 13:36:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.919 [2024-10-28 13:36:36.015556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:21.919 13:36:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.919 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.177 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.177 "name": 
"Existed_Raid", 00:25:22.177 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:22.177 "strip_size_kb": 64, 00:25:22.177 "state": "configuring", 00:25:22.177 "raid_level": "raid0", 00:25:22.177 "superblock": true, 00:25:22.177 "num_base_bdevs": 3, 00:25:22.177 "num_base_bdevs_discovered": 1, 00:25:22.177 "num_base_bdevs_operational": 3, 00:25:22.177 "base_bdevs_list": [ 00:25:22.177 { 00:25:22.177 "name": "BaseBdev1", 00:25:22.177 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:22.177 "is_configured": true, 00:25:22.177 "data_offset": 2048, 00:25:22.177 "data_size": 63488 00:25:22.177 }, 00:25:22.177 { 00:25:22.177 "name": null, 00:25:22.177 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:22.177 "is_configured": false, 00:25:22.177 "data_offset": 0, 00:25:22.177 "data_size": 63488 00:25:22.177 }, 00:25:22.177 { 00:25:22.177 "name": null, 00:25:22.177 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:22.177 "is_configured": false, 00:25:22.177 "data_offset": 0, 00:25:22.177 "data_size": 63488 00:25:22.177 } 00:25:22.177 ] 00:25:22.177 }' 00:25:22.177 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.177 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.435 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.435 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:22.435 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.435 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.435 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:22.698 
13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 [2024-10-28 13:36:36.635817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:22.698 "name": "Existed_Raid", 00:25:22.698 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:22.698 "strip_size_kb": 64, 00:25:22.698 "state": "configuring", 00:25:22.698 "raid_level": "raid0", 00:25:22.698 "superblock": true, 00:25:22.698 "num_base_bdevs": 3, 00:25:22.698 "num_base_bdevs_discovered": 2, 00:25:22.698 "num_base_bdevs_operational": 3, 00:25:22.698 "base_bdevs_list": [ 00:25:22.698 { 00:25:22.698 "name": "BaseBdev1", 00:25:22.698 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:22.698 "is_configured": true, 00:25:22.698 "data_offset": 2048, 00:25:22.698 "data_size": 63488 00:25:22.698 }, 00:25:22.698 { 00:25:22.698 "name": null, 00:25:22.698 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:22.698 "is_configured": false, 00:25:22.698 "data_offset": 0, 00:25:22.698 "data_size": 63488 00:25:22.698 }, 00:25:22.698 { 00:25:22.698 "name": "BaseBdev3", 00:25:22.698 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:22.698 "is_configured": true, 00:25:22.698 "data_offset": 2048, 00:25:22.698 "data_size": 63488 00:25:22.698 } 00:25:22.698 ] 00:25:22.698 }' 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:22.698 13:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.266 [2024-10-28 13:36:37.235995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.266 "name": "Existed_Raid", 00:25:23.266 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:23.266 "strip_size_kb": 64, 00:25:23.266 "state": "configuring", 00:25:23.266 "raid_level": "raid0", 00:25:23.266 "superblock": true, 00:25:23.266 "num_base_bdevs": 3, 00:25:23.266 "num_base_bdevs_discovered": 1, 00:25:23.266 "num_base_bdevs_operational": 3, 00:25:23.266 "base_bdevs_list": [ 00:25:23.266 { 00:25:23.266 "name": null, 00:25:23.266 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:23.266 "is_configured": false, 00:25:23.266 "data_offset": 0, 00:25:23.266 "data_size": 63488 00:25:23.266 }, 00:25:23.266 { 00:25:23.266 "name": null, 00:25:23.266 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:23.266 "is_configured": false, 00:25:23.266 "data_offset": 0, 00:25:23.266 "data_size": 63488 00:25:23.266 }, 00:25:23.266 { 00:25:23.266 "name": "BaseBdev3", 00:25:23.266 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:23.266 "is_configured": true, 00:25:23.266 "data_offset": 2048, 00:25:23.266 "data_size": 63488 00:25:23.266 } 
00:25:23.266 ] 00:25:23.266 }' 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.266 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.834 [2024-10-28 13:36:37.835284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.834 "name": "Existed_Raid", 00:25:23.834 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:23.834 "strip_size_kb": 64, 00:25:23.834 "state": "configuring", 00:25:23.834 "raid_level": "raid0", 00:25:23.834 "superblock": true, 00:25:23.834 "num_base_bdevs": 3, 00:25:23.834 "num_base_bdevs_discovered": 2, 00:25:23.834 "num_base_bdevs_operational": 3, 00:25:23.834 "base_bdevs_list": [ 00:25:23.834 { 00:25:23.834 "name": null, 00:25:23.834 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:23.834 "is_configured": false, 00:25:23.834 "data_offset": 0, 
00:25:23.834 "data_size": 63488 00:25:23.834 }, 00:25:23.834 { 00:25:23.834 "name": "BaseBdev2", 00:25:23.834 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:23.834 "is_configured": true, 00:25:23.834 "data_offset": 2048, 00:25:23.834 "data_size": 63488 00:25:23.834 }, 00:25:23.834 { 00:25:23.834 "name": "BaseBdev3", 00:25:23.834 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:23.834 "is_configured": true, 00:25:23.834 "data_offset": 2048, 00:25:23.834 "data_size": 63488 00:25:23.834 } 00:25:23.834 ] 00:25:23.834 }' 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.834 13:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 40baa209-da5f-41e7-ad24-762ba1b0204e 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 [2024-10-28 13:36:38.444268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:24.402 [2024-10-28 13:36:38.444587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:24.402 [2024-10-28 13:36:38.444609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:24.402 [2024-10-28 13:36:38.444948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:24.402 NewBaseBdev 00:25:24.402 [2024-10-28 13:36:38.445127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:24.402 [2024-10-28 13:36:38.445178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:24.402 [2024-10-28 13:36:38.445335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 [ 00:25:24.402 { 00:25:24.402 "name": "NewBaseBdev", 00:25:24.402 "aliases": [ 00:25:24.402 "40baa209-da5f-41e7-ad24-762ba1b0204e" 00:25:24.402 ], 00:25:24.402 "product_name": "Malloc disk", 00:25:24.402 "block_size": 512, 00:25:24.402 "num_blocks": 65536, 00:25:24.402 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:24.402 "assigned_rate_limits": { 00:25:24.402 "rw_ios_per_sec": 0, 00:25:24.402 "rw_mbytes_per_sec": 0, 00:25:24.402 "r_mbytes_per_sec": 0, 00:25:24.402 "w_mbytes_per_sec": 0 00:25:24.402 }, 00:25:24.402 "claimed": true, 00:25:24.402 "claim_type": "exclusive_write", 00:25:24.402 "zoned": false, 00:25:24.402 "supported_io_types": { 00:25:24.402 "read": true, 00:25:24.402 "write": true, 00:25:24.402 "unmap": true, 00:25:24.402 "flush": true, 00:25:24.402 "reset": true, 00:25:24.402 "nvme_admin": false, 00:25:24.402 "nvme_io": false, 00:25:24.402 "nvme_io_md": false, 00:25:24.402 "write_zeroes": true, 00:25:24.402 "zcopy": true, 00:25:24.402 "get_zone_info": false, 
00:25:24.402 "zone_management": false, 00:25:24.402 "zone_append": false, 00:25:24.402 "compare": false, 00:25:24.402 "compare_and_write": false, 00:25:24.402 "abort": true, 00:25:24.402 "seek_hole": false, 00:25:24.402 "seek_data": false, 00:25:24.402 "copy": true, 00:25:24.402 "nvme_iov_md": false 00:25:24.402 }, 00:25:24.402 "memory_domains": [ 00:25:24.402 { 00:25:24.402 "dma_device_id": "system", 00:25:24.402 "dma_device_type": 1 00:25:24.402 }, 00:25:24.402 { 00:25:24.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.402 "dma_device_type": 2 00:25:24.402 } 00:25:24.402 ], 00:25:24.402 "driver_specific": {} 00:25:24.402 } 00:25:24.402 ] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:24.402 13:36:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:24.402 "name": "Existed_Raid", 00:25:24.402 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:24.402 "strip_size_kb": 64, 00:25:24.402 "state": "online", 00:25:24.402 "raid_level": "raid0", 00:25:24.402 "superblock": true, 00:25:24.402 "num_base_bdevs": 3, 00:25:24.402 "num_base_bdevs_discovered": 3, 00:25:24.402 "num_base_bdevs_operational": 3, 00:25:24.402 "base_bdevs_list": [ 00:25:24.402 { 00:25:24.402 "name": "NewBaseBdev", 00:25:24.402 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:24.402 "is_configured": true, 00:25:24.402 "data_offset": 2048, 00:25:24.402 "data_size": 63488 00:25:24.402 }, 00:25:24.402 { 00:25:24.402 "name": "BaseBdev2", 00:25:24.402 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:24.402 "is_configured": true, 00:25:24.402 "data_offset": 2048, 00:25:24.402 "data_size": 63488 00:25:24.402 }, 00:25:24.402 { 00:25:24.402 "name": "BaseBdev3", 00:25:24.402 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:24.402 "is_configured": true, 00:25:24.402 "data_offset": 2048, 00:25:24.402 "data_size": 63488 00:25:24.402 } 00:25:24.402 ] 00:25:24.402 }' 00:25:24.402 13:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:24.402 
13:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:24.971 [2024-10-28 13:36:39.025541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.971 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:24.971 "name": "Existed_Raid", 00:25:24.971 "aliases": [ 00:25:24.971 "4dd15350-f53d-42c6-9ba3-948348e92c14" 00:25:24.971 ], 00:25:24.971 "product_name": "Raid Volume", 00:25:24.971 "block_size": 512, 00:25:24.971 "num_blocks": 190464, 00:25:24.971 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:24.971 "assigned_rate_limits": { 00:25:24.971 "rw_ios_per_sec": 0, 00:25:24.971 "rw_mbytes_per_sec": 0, 
00:25:24.971 "r_mbytes_per_sec": 0, 00:25:24.971 "w_mbytes_per_sec": 0 00:25:24.971 }, 00:25:24.971 "claimed": false, 00:25:24.971 "zoned": false, 00:25:24.971 "supported_io_types": { 00:25:24.971 "read": true, 00:25:24.971 "write": true, 00:25:24.971 "unmap": true, 00:25:24.971 "flush": true, 00:25:24.971 "reset": true, 00:25:24.971 "nvme_admin": false, 00:25:24.971 "nvme_io": false, 00:25:24.971 "nvme_io_md": false, 00:25:24.971 "write_zeroes": true, 00:25:24.971 "zcopy": false, 00:25:24.971 "get_zone_info": false, 00:25:24.971 "zone_management": false, 00:25:24.971 "zone_append": false, 00:25:24.971 "compare": false, 00:25:24.971 "compare_and_write": false, 00:25:24.971 "abort": false, 00:25:24.971 "seek_hole": false, 00:25:24.971 "seek_data": false, 00:25:24.971 "copy": false, 00:25:24.971 "nvme_iov_md": false 00:25:24.971 }, 00:25:24.971 "memory_domains": [ 00:25:24.971 { 00:25:24.971 "dma_device_id": "system", 00:25:24.971 "dma_device_type": 1 00:25:24.971 }, 00:25:24.971 { 00:25:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.971 "dma_device_type": 2 00:25:24.971 }, 00:25:24.971 { 00:25:24.971 "dma_device_id": "system", 00:25:24.971 "dma_device_type": 1 00:25:24.971 }, 00:25:24.972 { 00:25:24.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.972 "dma_device_type": 2 00:25:24.972 }, 00:25:24.972 { 00:25:24.972 "dma_device_id": "system", 00:25:24.972 "dma_device_type": 1 00:25:24.972 }, 00:25:24.972 { 00:25:24.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.972 "dma_device_type": 2 00:25:24.972 } 00:25:24.972 ], 00:25:24.972 "driver_specific": { 00:25:24.972 "raid": { 00:25:24.972 "uuid": "4dd15350-f53d-42c6-9ba3-948348e92c14", 00:25:24.972 "strip_size_kb": 64, 00:25:24.972 "state": "online", 00:25:24.972 "raid_level": "raid0", 00:25:24.972 "superblock": true, 00:25:24.972 "num_base_bdevs": 3, 00:25:24.972 "num_base_bdevs_discovered": 3, 00:25:24.972 "num_base_bdevs_operational": 3, 00:25:24.972 "base_bdevs_list": [ 00:25:24.972 { 
00:25:24.972 "name": "NewBaseBdev", 00:25:24.972 "uuid": "40baa209-da5f-41e7-ad24-762ba1b0204e", 00:25:24.972 "is_configured": true, 00:25:24.972 "data_offset": 2048, 00:25:24.972 "data_size": 63488 00:25:24.972 }, 00:25:24.972 { 00:25:24.972 "name": "BaseBdev2", 00:25:24.972 "uuid": "e1b8e2e9-a81d-4187-8281-f8752dbd263e", 00:25:24.972 "is_configured": true, 00:25:24.972 "data_offset": 2048, 00:25:24.972 "data_size": 63488 00:25:24.972 }, 00:25:24.972 { 00:25:24.972 "name": "BaseBdev3", 00:25:24.972 "uuid": "26404c00-c08f-4a6c-a52a-3d1d49806f23", 00:25:24.972 "is_configured": true, 00:25:24.972 "data_offset": 2048, 00:25:24.972 "data_size": 63488 00:25:24.972 } 00:25:24.972 ] 00:25:24.972 } 00:25:24.972 } 00:25:24.972 }' 00:25:24.972 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:25.231 BaseBdev2 00:25:25.231 BaseBdev3' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:25.231 13:36:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.231 [2024-10-28 13:36:39.369243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:25.231 [2024-10-28 13:36:39.369332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.231 [2024-10-28 13:36:39.369467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.231 [2024-10-28 13:36:39.369566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.231 [2024-10-28 13:36:39.369587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77294 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77294 ']' 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77294 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:25.231 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.231 13:36:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77294 00:25:25.490 killing process with pid 77294 00:25:25.490 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:25.490 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:25.490 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77294' 00:25:25.490 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77294 00:25:25.490 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77294 00:25:25.490 [2024-10-28 13:36:39.411859] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.490 [2024-10-28 13:36:39.464582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.750 13:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:25.750 00:25:25.750 real 0m10.722s 00:25:25.750 user 0m18.712s 00:25:25.750 sys 0m1.592s 00:25:25.750 ************************************ 00:25:25.750 END TEST raid_state_function_test_sb 00:25:25.750 ************************************ 00:25:25.750 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.750 13:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.750 13:36:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:25:25.750 13:36:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:25.750 13:36:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.750 13:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:25.750 ************************************ 00:25:25.750 START TEST raid_superblock_test 00:25:25.750 
************************************ 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77914 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 77914 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77914 ']' 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.750 13:36:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.009 [2024-10-28 13:36:39.956619] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:26.009 [2024-10-28 13:36:39.957643] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77914 ] 00:25:26.009 [2024-10-28 13:36:40.120819] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:26.009 [2024-10-28 13:36:40.150792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.268 [2024-10-28 13:36:40.220932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.268 [2024-10-28 13:36:40.300793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.268 [2024-10-28 13:36:40.300861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.835 malloc1 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:26.835 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.836 [2024-10-28 13:36:40.978006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:26.836 [2024-10-28 13:36:40.978121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.836 [2024-10-28 13:36:40.978186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:26.836 [2024-10-28 13:36:40.978212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.836 [2024-10-28 13:36:40.981582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.836 [2024-10-28 13:36:40.981637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:26.836 pt1 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.836 13:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.095 malloc2 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.095 [2024-10-28 13:36:41.011045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:27.095 [2024-10-28 13:36:41.011171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.095 [2024-10-28 13:36:41.011211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:27.095 [2024-10-28 13:36:41.011229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.095 [2024-10-28 13:36:41.014479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.095 [2024-10-28 13:36:41.014530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:27.095 pt2 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.095 malloc3 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.095 [2024-10-28 13:36:41.043469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:27.095 [2024-10-28 13:36:41.043869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.095 [2024-10-28 13:36:41.043964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:27.095 [2024-10-28 13:36:41.044094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:25:27.095 [2024-10-28 13:36:41.047435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.095 [2024-10-28 13:36:41.047628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:27.095 pt3 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.095 [2024-10-28 13:36:41.056002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:27.095 [2024-10-28 13:36:41.058961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:27.095 [2024-10-28 13:36:41.059074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:27.095 [2024-10-28 13:36:41.059325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:25:27.095 [2024-10-28 13:36:41.059351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:27.095 [2024-10-28 13:36:41.059779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:27.095 [2024-10-28 13:36:41.060031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:25:27.095 [2024-10-28 13:36:41.060051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:25:27.095 [2024-10-28 
13:36:41.060381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.095 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:27.096 "name": "raid_bdev1", 00:25:27.096 "uuid": 
"108b1669-5b98-4572-bdb3-861ed03be555", 00:25:27.096 "strip_size_kb": 64, 00:25:27.096 "state": "online", 00:25:27.096 "raid_level": "raid0", 00:25:27.096 "superblock": true, 00:25:27.096 "num_base_bdevs": 3, 00:25:27.096 "num_base_bdevs_discovered": 3, 00:25:27.096 "num_base_bdevs_operational": 3, 00:25:27.096 "base_bdevs_list": [ 00:25:27.096 { 00:25:27.096 "name": "pt1", 00:25:27.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.096 "is_configured": true, 00:25:27.096 "data_offset": 2048, 00:25:27.096 "data_size": 63488 00:25:27.096 }, 00:25:27.096 { 00:25:27.096 "name": "pt2", 00:25:27.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.096 "is_configured": true, 00:25:27.096 "data_offset": 2048, 00:25:27.096 "data_size": 63488 00:25:27.096 }, 00:25:27.096 { 00:25:27.096 "name": "pt3", 00:25:27.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:27.096 "is_configured": true, 00:25:27.096 "data_offset": 2048, 00:25:27.096 "data_size": 63488 00:25:27.096 } 00:25:27.096 ] 00:25:27.096 }' 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:27.096 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.665 [2024-10-28 13:36:41.616973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:27.665 "name": "raid_bdev1", 00:25:27.665 "aliases": [ 00:25:27.665 "108b1669-5b98-4572-bdb3-861ed03be555" 00:25:27.665 ], 00:25:27.665 "product_name": "Raid Volume", 00:25:27.665 "block_size": 512, 00:25:27.665 "num_blocks": 190464, 00:25:27.665 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:27.665 "assigned_rate_limits": { 00:25:27.665 "rw_ios_per_sec": 0, 00:25:27.665 "rw_mbytes_per_sec": 0, 00:25:27.665 "r_mbytes_per_sec": 0, 00:25:27.665 "w_mbytes_per_sec": 0 00:25:27.665 }, 00:25:27.665 "claimed": false, 00:25:27.665 "zoned": false, 00:25:27.665 "supported_io_types": { 00:25:27.665 "read": true, 00:25:27.665 "write": true, 00:25:27.665 "unmap": true, 00:25:27.665 "flush": true, 00:25:27.665 "reset": true, 00:25:27.665 "nvme_admin": false, 00:25:27.665 "nvme_io": false, 00:25:27.665 "nvme_io_md": false, 00:25:27.665 "write_zeroes": true, 00:25:27.665 "zcopy": false, 00:25:27.665 "get_zone_info": false, 00:25:27.665 "zone_management": false, 00:25:27.665 "zone_append": false, 00:25:27.665 "compare": false, 00:25:27.665 "compare_and_write": false, 00:25:27.665 "abort": false, 00:25:27.665 "seek_hole": false, 00:25:27.665 "seek_data": false, 00:25:27.665 "copy": false, 00:25:27.665 "nvme_iov_md": false 00:25:27.665 }, 00:25:27.665 "memory_domains": [ 00:25:27.665 { 00:25:27.665 "dma_device_id": "system", 00:25:27.665 
"dma_device_type": 1 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.665 "dma_device_type": 2 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "dma_device_id": "system", 00:25:27.665 "dma_device_type": 1 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.665 "dma_device_type": 2 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "dma_device_id": "system", 00:25:27.665 "dma_device_type": 1 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.665 "dma_device_type": 2 00:25:27.665 } 00:25:27.665 ], 00:25:27.665 "driver_specific": { 00:25:27.665 "raid": { 00:25:27.665 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:27.665 "strip_size_kb": 64, 00:25:27.665 "state": "online", 00:25:27.665 "raid_level": "raid0", 00:25:27.665 "superblock": true, 00:25:27.665 "num_base_bdevs": 3, 00:25:27.665 "num_base_bdevs_discovered": 3, 00:25:27.665 "num_base_bdevs_operational": 3, 00:25:27.665 "base_bdevs_list": [ 00:25:27.665 { 00:25:27.665 "name": "pt1", 00:25:27.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:27.665 "is_configured": true, 00:25:27.665 "data_offset": 2048, 00:25:27.665 "data_size": 63488 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "name": "pt2", 00:25:27.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:27.665 "is_configured": true, 00:25:27.665 "data_offset": 2048, 00:25:27.665 "data_size": 63488 00:25:27.665 }, 00:25:27.665 { 00:25:27.665 "name": "pt3", 00:25:27.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:27.665 "is_configured": true, 00:25:27.665 "data_offset": 2048, 00:25:27.665 "data_size": 63488 00:25:27.665 } 00:25:27.665 ] 00:25:27.665 } 00:25:27.665 } 00:25:27.665 }' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:27.665 pt2 00:25:27.665 pt3' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.665 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:27.924 [2024-10-28 13:36:41.937100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=108b1669-5b98-4572-bdb3-861ed03be555 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 108b1669-5b98-4572-bdb3-861ed03be555 ']' 00:25:27.924 13:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.924 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.924 [2024-10-28 13:36:41.976630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:27.924 [2024-10-28 13:36:41.976692] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:27.924 [2024-10-28 13:36:41.976829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:27.924 [2024-10-28 13:36:41.976938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:27.925 [2024-10-28 13:36:41.976971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.925 13:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.925 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.184 13:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.184 [2024-10-28 13:36:42.128734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:28.184 [2024-10-28 13:36:42.131737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:28.184 [2024-10-28 13:36:42.131830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:28.184 [2024-10-28 13:36:42.131925] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:28.184 [2024-10-28 13:36:42.132020] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:28.184 [2024-10-28 13:36:42.132064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:28.184 [2024-10-28 13:36:42.132098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.184 [2024-10-28 13:36:42.132117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:25:28.184 request: 00:25:28.184 { 00:25:28.184 "name": "raid_bdev1", 00:25:28.184 "raid_level": "raid0", 00:25:28.184 "base_bdevs": [ 00:25:28.184 "malloc1", 00:25:28.184 "malloc2", 00:25:28.184 "malloc3" 00:25:28.184 ], 00:25:28.184 "strip_size_kb": 64, 00:25:28.184 "superblock": false, 00:25:28.184 "method": "bdev_raid_create", 00:25:28.184 "req_id": 1 00:25:28.184 } 00:25:28.184 Got JSON-RPC error response 00:25:28.184 response: 00:25:28.184 { 00:25:28.184 "code": -17, 00:25:28.184 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:28.184 } 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:28.184 13:36:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.184 [2024-10-28 13:36:42.192728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:28.184 [2024-10-28 13:36:42.193007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.184 [2024-10-28 13:36:42.193102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:28.184 [2024-10-28 13:36:42.193317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.184 [2024-10-28 13:36:42.196737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.184 [2024-10-28 13:36:42.196913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:28.184 [2024-10-28 13:36:42.197201] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:28.184 [2024-10-28 13:36:42.197395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:28.184 pt1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:25:28.184 13:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.184 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.185 "name": "raid_bdev1", 00:25:28.185 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:28.185 "strip_size_kb": 64, 00:25:28.185 "state": "configuring", 00:25:28.185 "raid_level": "raid0", 00:25:28.185 "superblock": true, 00:25:28.185 "num_base_bdevs": 3, 00:25:28.185 "num_base_bdevs_discovered": 1, 00:25:28.185 "num_base_bdevs_operational": 3, 00:25:28.185 "base_bdevs_list": [ 
00:25:28.185 { 00:25:28.185 "name": "pt1", 00:25:28.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:28.185 "is_configured": true, 00:25:28.185 "data_offset": 2048, 00:25:28.185 "data_size": 63488 00:25:28.185 }, 00:25:28.185 { 00:25:28.185 "name": null, 00:25:28.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.185 "is_configured": false, 00:25:28.185 "data_offset": 2048, 00:25:28.185 "data_size": 63488 00:25:28.185 }, 00:25:28.185 { 00:25:28.185 "name": null, 00:25:28.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:28.185 "is_configured": false, 00:25:28.185 "data_offset": 2048, 00:25:28.185 "data_size": 63488 00:25:28.185 } 00:25:28.185 ] 00:25:28.185 }' 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.185 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.753 [2024-10-28 13:36:42.714282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:28.753 [2024-10-28 13:36:42.714431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.753 [2024-10-28 13:36:42.714494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:28.753 [2024-10-28 13:36:42.714515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.753 [2024-10-28 13:36:42.715223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:28.753 [2024-10-28 
13:36:42.715537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:28.753 [2024-10-28 13:36:42.715712] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:28.753 [2024-10-28 13:36:42.715768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:28.753 pt2 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.753 [2024-10-28 13:36:42.722309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.753 "name": "raid_bdev1", 00:25:28.753 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:28.753 "strip_size_kb": 64, 00:25:28.753 "state": "configuring", 00:25:28.753 "raid_level": "raid0", 00:25:28.753 "superblock": true, 00:25:28.753 "num_base_bdevs": 3, 00:25:28.753 "num_base_bdevs_discovered": 1, 00:25:28.753 "num_base_bdevs_operational": 3, 00:25:28.753 "base_bdevs_list": [ 00:25:28.753 { 00:25:28.753 "name": "pt1", 00:25:28.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:28.753 "is_configured": true, 00:25:28.753 "data_offset": 2048, 00:25:28.753 "data_size": 63488 00:25:28.753 }, 00:25:28.753 { 00:25:28.753 "name": null, 00:25:28.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:28.753 "is_configured": false, 00:25:28.753 "data_offset": 0, 00:25:28.753 "data_size": 63488 00:25:28.753 }, 00:25:28.753 { 00:25:28.753 "name": null, 00:25:28.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:28.753 "is_configured": false, 00:25:28.753 "data_offset": 2048, 00:25:28.753 "data_size": 63488 00:25:28.753 } 00:25:28.753 ] 00:25:28.753 }' 00:25:28.753 13:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.753 13:36:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.322 [2024-10-28 13:36:43.222498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:29.322 [2024-10-28 13:36:43.222680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.322 [2024-10-28 13:36:43.222720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:29.322 [2024-10-28 13:36:43.222743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.322 [2024-10-28 13:36:43.223456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.322 [2024-10-28 13:36:43.223516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:29.322 [2024-10-28 13:36:43.223661] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:29.322 [2024-10-28 13:36:43.223721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:29.322 pt2 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.322 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.322 [2024-10-28 13:36:43.230351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:29.322 [2024-10-28 13:36:43.230438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:29.322 [2024-10-28 13:36:43.230468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:29.323 [2024-10-28 13:36:43.230489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:29.323 [2024-10-28 13:36:43.230955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:29.323 [2024-10-28 13:36:43.231011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:29.323 [2024-10-28 13:36:43.231112] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:29.323 [2024-10-28 13:36:43.231172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:29.323 [2024-10-28 13:36:43.231325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:29.323 [2024-10-28 13:36:43.231351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:29.323 [2024-10-28 13:36:43.231731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:25:29.323 [2024-10-28 13:36:43.231947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:29.323 [2024-10-28 13:36:43.231965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:25:29.323 [2024-10-28 13:36:43.232117] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.323 pt3 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:29.323 "name": "raid_bdev1", 00:25:29.323 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:29.323 "strip_size_kb": 64, 00:25:29.323 "state": "online", 00:25:29.323 "raid_level": "raid0", 00:25:29.323 "superblock": true, 00:25:29.323 "num_base_bdevs": 3, 00:25:29.323 "num_base_bdevs_discovered": 3, 00:25:29.323 "num_base_bdevs_operational": 3, 00:25:29.323 "base_bdevs_list": [ 00:25:29.323 { 00:25:29.323 "name": "pt1", 00:25:29.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:29.323 "is_configured": true, 00:25:29.323 "data_offset": 2048, 00:25:29.323 "data_size": 63488 00:25:29.323 }, 00:25:29.323 { 00:25:29.323 "name": "pt2", 00:25:29.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:29.323 "is_configured": true, 00:25:29.323 "data_offset": 2048, 00:25:29.323 "data_size": 63488 00:25:29.323 }, 00:25:29.323 { 00:25:29.323 "name": "pt3", 00:25:29.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:29.323 "is_configured": true, 00:25:29.323 "data_offset": 2048, 00:25:29.323 "data_size": 63488 00:25:29.323 } 00:25:29.323 ] 00:25:29.323 }' 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:29.323 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:29.582 13:36:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.582 [2024-10-28 13:36:43.718966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:29.582 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:29.842 "name": "raid_bdev1", 00:25:29.842 "aliases": [ 00:25:29.842 "108b1669-5b98-4572-bdb3-861ed03be555" 00:25:29.842 ], 00:25:29.842 "product_name": "Raid Volume", 00:25:29.842 "block_size": 512, 00:25:29.842 "num_blocks": 190464, 00:25:29.842 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:29.842 "assigned_rate_limits": { 00:25:29.842 "rw_ios_per_sec": 0, 00:25:29.842 "rw_mbytes_per_sec": 0, 00:25:29.842 "r_mbytes_per_sec": 0, 00:25:29.842 "w_mbytes_per_sec": 0 00:25:29.842 }, 00:25:29.842 "claimed": false, 00:25:29.842 "zoned": false, 00:25:29.842 "supported_io_types": { 00:25:29.842 "read": true, 00:25:29.842 "write": true, 00:25:29.842 "unmap": true, 00:25:29.842 "flush": true, 00:25:29.842 "reset": true, 00:25:29.842 "nvme_admin": false, 00:25:29.842 "nvme_io": false, 00:25:29.842 "nvme_io_md": false, 00:25:29.842 "write_zeroes": true, 00:25:29.842 "zcopy": false, 00:25:29.842 "get_zone_info": false, 00:25:29.842 "zone_management": false, 00:25:29.842 "zone_append": false, 00:25:29.842 "compare": false, 00:25:29.842 "compare_and_write": false, 00:25:29.842 "abort": false, 00:25:29.842 "seek_hole": false, 00:25:29.842 
"seek_data": false, 00:25:29.842 "copy": false, 00:25:29.842 "nvme_iov_md": false 00:25:29.842 }, 00:25:29.842 "memory_domains": [ 00:25:29.842 { 00:25:29.842 "dma_device_id": "system", 00:25:29.842 "dma_device_type": 1 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.842 "dma_device_type": 2 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "dma_device_id": "system", 00:25:29.842 "dma_device_type": 1 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.842 "dma_device_type": 2 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "dma_device_id": "system", 00:25:29.842 "dma_device_type": 1 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.842 "dma_device_type": 2 00:25:29.842 } 00:25:29.842 ], 00:25:29.842 "driver_specific": { 00:25:29.842 "raid": { 00:25:29.842 "uuid": "108b1669-5b98-4572-bdb3-861ed03be555", 00:25:29.842 "strip_size_kb": 64, 00:25:29.842 "state": "online", 00:25:29.842 "raid_level": "raid0", 00:25:29.842 "superblock": true, 00:25:29.842 "num_base_bdevs": 3, 00:25:29.842 "num_base_bdevs_discovered": 3, 00:25:29.842 "num_base_bdevs_operational": 3, 00:25:29.842 "base_bdevs_list": [ 00:25:29.842 { 00:25:29.842 "name": "pt1", 00:25:29.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:29.842 "is_configured": true, 00:25:29.842 "data_offset": 2048, 00:25:29.842 "data_size": 63488 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "name": "pt2", 00:25:29.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:29.842 "is_configured": true, 00:25:29.842 "data_offset": 2048, 00:25:29.842 "data_size": 63488 00:25:29.842 }, 00:25:29.842 { 00:25:29.842 "name": "pt3", 00:25:29.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:29.842 "is_configured": true, 00:25:29.842 "data_offset": 2048, 00:25:29.842 "data_size": 63488 00:25:29.842 } 00:25:29.842 ] 00:25:29.842 } 00:25:29.842 } 00:25:29.842 }' 00:25:29.842 13:36:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:29.842 pt2 00:25:29.842 pt3' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:29.842 13:36:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.102 [2024-10-28 13:36:44.039121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
108b1669-5b98-4572-bdb3-861ed03be555 '!=' 108b1669-5b98-4572-bdb3-861ed03be555 ']' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77914 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77914 ']' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77914 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77914 00:25:30.102 killing process with pid 77914 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77914' 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77914 00:25:30.102 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77914 00:25:30.102 [2024-10-28 13:36:44.135486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.102 [2024-10-28 13:36:44.135721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.102 [2024-10-28 13:36:44.135828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:25:30.102 [2024-10-28 13:36:44.135854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:25:30.102 [2024-10-28 13:36:44.200228] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:30.670 ************************************ 00:25:30.670 END TEST raid_superblock_test 00:25:30.670 ************************************ 00:25:30.670 13:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:30.670 00:25:30.670 real 0m4.698s 00:25:30.670 user 0m7.539s 00:25:30.670 sys 0m0.869s 00:25:30.670 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:30.670 13:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.670 13:36:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:25:30.670 13:36:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:30.670 13:36:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:30.670 13:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.671 ************************************ 00:25:30.671 START TEST raid_read_error_test 00:25:30.671 ************************************ 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:30.671 13:36:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jWPZg8womA 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78162 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78162 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78162 ']' 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.671 13:36:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.671 [2024-10-28 13:36:44.702361] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:30.671 [2024-10-28 13:36:44.702559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78162 ] 00:25:30.930 [2024-10-28 13:36:44.854551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:25:30.930 [2024-10-28 13:36:44.884865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.930 [2024-10-28 13:36:44.962516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.930 [2024-10-28 13:36:45.049456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.930 [2024-10-28 13:36:45.049588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 BaseBdev1_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 true 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 [2024-10-28 13:36:45.812111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:31.867 [2024-10-28 13:36:45.812237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.867 [2024-10-28 13:36:45.812273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:31.867 [2024-10-28 13:36:45.812309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.867 [2024-10-28 13:36:45.815858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.867 [2024-10-28 13:36:45.816050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.867 BaseBdev1 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 BaseBdev2_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 true 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 [2024-10-28 13:36:45.860988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:31.867 [2024-10-28 13:36:45.861349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.867 [2024-10-28 13:36:45.861392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:31.867 [2024-10-28 13:36:45.861416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.867 [2024-10-28 13:36:45.864593] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.867 [2024-10-28 13:36:45.864844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:31.867 BaseBdev2 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 BaseBdev3_malloc 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:31.867 
13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 true 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 [2024-10-28 13:36:45.905535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:31.867 [2024-10-28 13:36:45.905644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.867 [2024-10-28 13:36:45.905680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:31.867 [2024-10-28 13:36:45.905702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.867 [2024-10-28 13:36:45.908884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.867 [2024-10-28 13:36:45.908943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:31.867 BaseBdev3 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 [2024-10-28 13:36:45.917661] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.867 [2024-10-28 13:36:45.920539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:31.867 [2024-10-28 13:36:45.920672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:31.867 [2024-10-28 13:36:45.920980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:31.867 [2024-10-28 13:36:45.921021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:31.867 [2024-10-28 13:36:45.921424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:31.867 [2024-10-28 13:36:45.921647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:31.867 [2024-10-28 13:36:45.921692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:31.867 [2024-10-28 13:36:45.921936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.867 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.867 "name": "raid_bdev1", 00:25:31.867 "uuid": "3fd84874-ca08-4b52-8f3b-03a54ce4731c", 00:25:31.867 "strip_size_kb": 64, 00:25:31.867 "state": "online", 00:25:31.868 "raid_level": "raid0", 00:25:31.868 "superblock": true, 00:25:31.868 "num_base_bdevs": 3, 00:25:31.868 "num_base_bdevs_discovered": 3, 00:25:31.868 "num_base_bdevs_operational": 3, 00:25:31.868 "base_bdevs_list": [ 00:25:31.868 { 00:25:31.868 "name": "BaseBdev1", 00:25:31.868 "uuid": "5a1f6c12-e2f5-5aef-90bc-6be7370bb51c", 00:25:31.868 "is_configured": true, 00:25:31.868 "data_offset": 2048, 00:25:31.868 "data_size": 63488 00:25:31.868 }, 00:25:31.868 { 00:25:31.868 "name": "BaseBdev2", 00:25:31.868 "uuid": "58f2b4f2-3639-5b86-851e-dc6d951a50b8", 00:25:31.868 "is_configured": true, 00:25:31.868 "data_offset": 2048, 00:25:31.868 "data_size": 63488 00:25:31.868 }, 00:25:31.868 { 00:25:31.868 "name": "BaseBdev3", 00:25:31.868 "uuid": "94477af9-98db-51d1-9032-c7145d01006b", 00:25:31.868 "is_configured": true, 00:25:31.868 "data_offset": 
2048, 00:25:31.868 "data_size": 63488 00:25:31.868 } 00:25:31.868 ] 00:25:31.868 }' 00:25:31.868 13:36:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.868 13:36:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.435 13:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:32.435 13:36:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:32.435 [2024-10-28 13:36:46.586939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:33.373 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:33.373 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.373 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.373 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.373 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.374 "name": "raid_bdev1", 00:25:33.374 "uuid": "3fd84874-ca08-4b52-8f3b-03a54ce4731c", 00:25:33.374 "strip_size_kb": 64, 00:25:33.374 "state": "online", 00:25:33.374 "raid_level": "raid0", 00:25:33.374 "superblock": true, 00:25:33.374 "num_base_bdevs": 3, 00:25:33.374 "num_base_bdevs_discovered": 3, 00:25:33.374 "num_base_bdevs_operational": 3, 00:25:33.374 "base_bdevs_list": [ 00:25:33.374 { 00:25:33.374 "name": "BaseBdev1", 00:25:33.374 "uuid": "5a1f6c12-e2f5-5aef-90bc-6be7370bb51c", 00:25:33.374 "is_configured": true, 00:25:33.374 "data_offset": 2048, 00:25:33.374 "data_size": 63488 00:25:33.374 }, 00:25:33.374 { 00:25:33.374 "name": "BaseBdev2", 00:25:33.374 "uuid": "58f2b4f2-3639-5b86-851e-dc6d951a50b8", 00:25:33.374 "is_configured": true, 00:25:33.374 "data_offset": 2048, 
00:25:33.374 "data_size": 63488 00:25:33.374 }, 00:25:33.374 { 00:25:33.374 "name": "BaseBdev3", 00:25:33.374 "uuid": "94477af9-98db-51d1-9032-c7145d01006b", 00:25:33.374 "is_configured": true, 00:25:33.374 "data_offset": 2048, 00:25:33.374 "data_size": 63488 00:25:33.374 } 00:25:33.374 ] 00:25:33.374 }' 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.374 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.944 [2024-10-28 13:36:47.961769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:33.944 [2024-10-28 13:36:47.961852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:33.944 [2024-10-28 13:36:47.965151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:33.944 [2024-10-28 13:36:47.965236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.944 [2024-10-28 13:36:47.965300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:33.944 [2024-10-28 13:36:47.965318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:33.944 { 00:25:33.944 "results": [ 00:25:33.944 { 00:25:33.944 "job": "raid_bdev1", 00:25:33.944 "core_mask": "0x1", 00:25:33.944 "workload": "randrw", 00:25:33.944 "percentage": 50, 00:25:33.944 "status": "finished", 00:25:33.944 "queue_depth": 1, 00:25:33.944 "io_size": 131072, 00:25:33.944 "runtime": 1.371584, 00:25:33.944 "iops": 9050.849237086464, 00:25:33.944 "mibps": 
1131.356154635808, 00:25:33.944 "io_failed": 1, 00:25:33.944 "io_timeout": 0, 00:25:33.944 "avg_latency_us": 154.85890176838868, 00:25:33.944 "min_latency_us": 43.52, 00:25:33.944 "max_latency_us": 1921.3963636363637 00:25:33.944 } 00:25:33.944 ], 00:25:33.944 "core_count": 1 00:25:33.944 } 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78162 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78162 ']' 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78162 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:33.944 13:36:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78162 00:25:33.944 killing process with pid 78162 00:25:33.944 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:33.944 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:33.944 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78162' 00:25:33.944 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78162 00:25:33.944 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78162 00:25:33.944 [2024-10-28 13:36:48.007499] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:33.944 [2024-10-28 13:36:48.050184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jWPZg8womA 00:25:34.513 
13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:25:34.513 00:25:34.513 real 0m3.801s 00:25:34.513 user 0m5.002s 00:25:34.513 sys 0m0.611s 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:34.513 13:36:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.513 ************************************ 00:25:34.513 END TEST raid_read_error_test 00:25:34.513 ************************************ 00:25:34.513 13:36:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:25:34.513 13:36:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:34.513 13:36:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:34.513 13:36:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.513 ************************************ 00:25:34.513 START TEST raid_write_error_test 00:25:34.513 ************************************ 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:25:34.513 
13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:34.513 13:36:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0eFXh6pY3S 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78302 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78302 00:25:34.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78302 ']' 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:34.513 13:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.513 [2024-10-28 13:36:48.538094] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:25:34.513 [2024-10-28 13:36:48.538278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78302 ] 00:25:34.772 [2024-10-28 13:36:48.682624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:34.772 [2024-10-28 13:36:48.707709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.772 [2024-10-28 13:36:48.781009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.772 [2024-10-28 13:36:48.860111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:34.772 [2024-10-28 13:36:48.860257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 BaseBdev1_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 true 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 [2024-10-28 13:36:49.636868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:35.708 [2024-10-28 13:36:49.637319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.708 [2024-10-28 13:36:49.637366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:35.708 [2024-10-28 13:36:49.637390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.708 [2024-10-28 13:36:49.640612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.708 [2024-10-28 13:36:49.640806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:35.708 BaseBdev1 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 BaseBdev2_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 true 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 [2024-10-28 13:36:49.684656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:35.708 [2024-10-28 13:36:49.684764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.708 [2024-10-28 13:36:49.684796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:35.708 [2024-10-28 13:36:49.684815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.708 [2024-10-28 13:36:49.687964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.708 [2024-10-28 13:36:49.688309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:35.708 BaseBdev2 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:35.708 13:36:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 BaseBdev3_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 true 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 [2024-10-28 13:36:49.728271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:35.708 [2024-10-28 13:36:49.728371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.708 [2024-10-28 13:36:49.728401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:35.708 [2024-10-28 13:36:49.728420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.708 [2024-10-28 13:36:49.731471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.708 [2024-10-28 13:36:49.731800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:35.708 BaseBdev3 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 [2024-10-28 13:36:49.740469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:35.708 [2024-10-28 13:36:49.743438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:35.708 [2024-10-28 13:36:49.743694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:35.708 [2024-10-28 13:36:49.743976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:35.708 [2024-10-28 13:36:49.743996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:35.708 [2024-10-28 13:36:49.744411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:35.708 [2024-10-28 13:36:49.744634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:35.708 [2024-10-28 13:36:49.744655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:35.708 [2024-10-28 13:36:49.744902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.708 "name": "raid_bdev1", 00:25:35.708 "uuid": "b39cd35a-c084-44bc-b47a-d41d46399dfb", 00:25:35.708 "strip_size_kb": 64, 00:25:35.708 "state": "online", 00:25:35.708 "raid_level": "raid0", 00:25:35.708 "superblock": true, 00:25:35.708 "num_base_bdevs": 3, 00:25:35.708 "num_base_bdevs_discovered": 3, 00:25:35.708 "num_base_bdevs_operational": 3, 00:25:35.708 "base_bdevs_list": [ 00:25:35.708 { 00:25:35.708 "name": "BaseBdev1", 00:25:35.708 "uuid": "c555c554-c0a3-5baf-a5e9-6da7065cfab6", 00:25:35.708 "is_configured": true, 00:25:35.708 "data_offset": 2048, 
00:25:35.708 "data_size": 63488 00:25:35.708 }, 00:25:35.708 { 00:25:35.708 "name": "BaseBdev2", 00:25:35.708 "uuid": "4bad35de-f9a3-5135-8c3a-482bf59f1a17", 00:25:35.708 "is_configured": true, 00:25:35.708 "data_offset": 2048, 00:25:35.708 "data_size": 63488 00:25:35.708 }, 00:25:35.708 { 00:25:35.708 "name": "BaseBdev3", 00:25:35.708 "uuid": "dcd66c3c-7eef-5889-98f2-a0eabbf5c2a5", 00:25:35.708 "is_configured": true, 00:25:35.708 "data_offset": 2048, 00:25:35.708 "data_size": 63488 00:25:35.708 } 00:25:35.708 ] 00:25:35.708 }' 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.708 13:36:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.275 13:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:36.275 13:36:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:36.275 [2024-10-28 13:36:50.385793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.210 "name": "raid_bdev1", 00:25:37.210 "uuid": "b39cd35a-c084-44bc-b47a-d41d46399dfb", 00:25:37.210 "strip_size_kb": 64, 00:25:37.210 "state": "online", 00:25:37.210 "raid_level": "raid0", 00:25:37.210 "superblock": true, 00:25:37.210 "num_base_bdevs": 3, 00:25:37.210 "num_base_bdevs_discovered": 3, 
00:25:37.210 "num_base_bdevs_operational": 3, 00:25:37.210 "base_bdevs_list": [ 00:25:37.210 { 00:25:37.210 "name": "BaseBdev1", 00:25:37.210 "uuid": "c555c554-c0a3-5baf-a5e9-6da7065cfab6", 00:25:37.210 "is_configured": true, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 }, 00:25:37.210 { 00:25:37.210 "name": "BaseBdev2", 00:25:37.210 "uuid": "4bad35de-f9a3-5135-8c3a-482bf59f1a17", 00:25:37.210 "is_configured": true, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 }, 00:25:37.210 { 00:25:37.210 "name": "BaseBdev3", 00:25:37.210 "uuid": "dcd66c3c-7eef-5889-98f2-a0eabbf5c2a5", 00:25:37.210 "is_configured": true, 00:25:37.210 "data_offset": 2048, 00:25:37.210 "data_size": 63488 00:25:37.210 } 00:25:37.210 ] 00:25:37.210 }' 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.210 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.778 [2024-10-28 13:36:51.785618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.778 [2024-10-28 13:36:51.785664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.778 [2024-10-28 13:36:51.789339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.778 [2024-10-28 13:36:51.789404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.778 [2024-10-28 13:36:51.789463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.778 [2024-10-28 13:36:51.789477] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:37.778 { 00:25:37.778 "results": [ 00:25:37.778 { 00:25:37.778 "job": "raid_bdev1", 00:25:37.778 "core_mask": "0x1", 00:25:37.778 "workload": "randrw", 00:25:37.778 "percentage": 50, 00:25:37.778 "status": "finished", 00:25:37.778 "queue_depth": 1, 00:25:37.778 "io_size": 131072, 00:25:37.778 "runtime": 1.396907, 00:25:37.778 "iops": 9685.68415792891, 00:25:37.778 "mibps": 1210.7105197411138, 00:25:37.778 "io_failed": 1, 00:25:37.778 "io_timeout": 0, 00:25:37.778 "avg_latency_us": 145.40685536915234, 00:25:37.778 "min_latency_us": 28.276363636363637, 00:25:37.778 "max_latency_us": 1966.08 00:25:37.778 } 00:25:37.778 ], 00:25:37.778 "core_count": 1 00:25:37.778 } 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78302 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78302 ']' 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78302 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78302 00:25:37.778 killing process with pid 78302 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78302' 00:25:37.778 13:36:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78302 00:25:37.778 [2024-10-28 13:36:51.829845] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:37.778 13:36:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78302 00:25:37.778 [2024-10-28 13:36:51.871413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0eFXh6pY3S 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:25:38.347 00:25:38.347 real 0m3.767s 00:25:38.347 user 0m4.929s 00:25:38.347 sys 0m0.621s 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:38.347 ************************************ 00:25:38.347 END TEST raid_write_error_test 00:25:38.347 ************************************ 00:25:38.347 13:36:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.347 13:36:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:38.347 13:36:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:25:38.347 13:36:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:38.347 13:36:52 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:38.347 13:36:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:38.347 ************************************ 00:25:38.347 START TEST raid_state_function_test 00:25:38.347 ************************************ 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:38.347 Process raid pid: 78434 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78434 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78434' 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78434 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78434 ']' 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.347 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.348 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.348 13:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.348 [2024-10-28 13:36:52.369700] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:38.348 [2024-10-28 13:36:52.369898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.606 [2024-10-28 13:36:52.516691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:38.607 [2024-10-28 13:36:52.542965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.607 [2024-10-28 13:36:52.604012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.607 [2024-10-28 13:36:52.683693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:38.607 [2024-10-28 13:36:52.683750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.540 [2024-10-28 13:36:53.417086] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:39.540 [2024-10-28 13:36:53.417197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:39.540 [2024-10-28 13:36:53.417218] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:39.540 [2024-10-28 13:36:53.417232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:39.540 [2024-10-28 13:36:53.417253] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:39.540 [2024-10-28 13:36:53.417265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.540 13:36:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:39.540 "name": "Existed_Raid", 00:25:39.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.540 "strip_size_kb": 64, 00:25:39.540 "state": "configuring", 00:25:39.540 
"raid_level": "concat", 00:25:39.540 "superblock": false, 00:25:39.540 "num_base_bdevs": 3, 00:25:39.540 "num_base_bdevs_discovered": 0, 00:25:39.540 "num_base_bdevs_operational": 3, 00:25:39.540 "base_bdevs_list": [ 00:25:39.540 { 00:25:39.540 "name": "BaseBdev1", 00:25:39.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.540 "is_configured": false, 00:25:39.540 "data_offset": 0, 00:25:39.540 "data_size": 0 00:25:39.540 }, 00:25:39.540 { 00:25:39.540 "name": "BaseBdev2", 00:25:39.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.540 "is_configured": false, 00:25:39.540 "data_offset": 0, 00:25:39.540 "data_size": 0 00:25:39.540 }, 00:25:39.540 { 00:25:39.540 "name": "BaseBdev3", 00:25:39.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.540 "is_configured": false, 00:25:39.540 "data_offset": 0, 00:25:39.540 "data_size": 0 00:25:39.540 } 00:25:39.540 ] 00:25:39.540 }' 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:39.540 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.798 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:39.798 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.798 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 [2024-10-28 13:36:53.957189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:40.058 [2024-10-28 13:36:53.957400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 [2024-10-28 13:36:53.965196] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:40.058 [2024-10-28 13:36:53.965247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:40.058 [2024-10-28 13:36:53.965267] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.058 [2024-10-28 13:36:53.965280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.058 [2024-10-28 13:36:53.965293] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:40.058 [2024-10-28 13:36:53.965306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 [2024-10-28 13:36:53.989364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.058 BaseBdev1 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:40.058 13:36:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 13:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:40.058 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 [ 00:25:40.058 { 00:25:40.058 "name": "BaseBdev1", 00:25:40.058 "aliases": [ 00:25:40.058 "d2c92caf-5f18-490c-b3a3-5e125b014e6c" 00:25:40.058 ], 00:25:40.058 "product_name": "Malloc disk", 00:25:40.058 "block_size": 512, 00:25:40.058 "num_blocks": 65536, 00:25:40.058 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 00:25:40.058 "assigned_rate_limits": { 00:25:40.058 "rw_ios_per_sec": 0, 00:25:40.058 "rw_mbytes_per_sec": 0, 00:25:40.058 "r_mbytes_per_sec": 0, 00:25:40.058 "w_mbytes_per_sec": 0 00:25:40.058 }, 00:25:40.058 "claimed": true, 00:25:40.058 "claim_type": "exclusive_write", 00:25:40.058 "zoned": false, 00:25:40.058 "supported_io_types": { 00:25:40.058 "read": true, 00:25:40.058 "write": true, 00:25:40.058 "unmap": true, 00:25:40.058 "flush": true, 
00:25:40.058 "reset": true, 00:25:40.058 "nvme_admin": false, 00:25:40.058 "nvme_io": false, 00:25:40.058 "nvme_io_md": false, 00:25:40.058 "write_zeroes": true, 00:25:40.058 "zcopy": true, 00:25:40.058 "get_zone_info": false, 00:25:40.058 "zone_management": false, 00:25:40.058 "zone_append": false, 00:25:40.058 "compare": false, 00:25:40.058 "compare_and_write": false, 00:25:40.058 "abort": true, 00:25:40.058 "seek_hole": false, 00:25:40.058 "seek_data": false, 00:25:40.058 "copy": true, 00:25:40.058 "nvme_iov_md": false 00:25:40.058 }, 00:25:40.058 "memory_domains": [ 00:25:40.058 { 00:25:40.058 "dma_device_id": "system", 00:25:40.058 "dma_device_type": 1 00:25:40.058 }, 00:25:40.058 { 00:25:40.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.058 "dma_device_type": 2 00:25:40.058 } 00:25:40.058 ], 00:25:40.058 "driver_specific": {} 00:25:40.058 } 00:25:40.058 ] 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.059 "name": "Existed_Raid", 00:25:40.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.059 "strip_size_kb": 64, 00:25:40.059 "state": "configuring", 00:25:40.059 "raid_level": "concat", 00:25:40.059 "superblock": false, 00:25:40.059 "num_base_bdevs": 3, 00:25:40.059 "num_base_bdevs_discovered": 1, 00:25:40.059 "num_base_bdevs_operational": 3, 00:25:40.059 "base_bdevs_list": [ 00:25:40.059 { 00:25:40.059 "name": "BaseBdev1", 00:25:40.059 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 00:25:40.059 "is_configured": true, 00:25:40.059 "data_offset": 0, 00:25:40.059 "data_size": 65536 00:25:40.059 }, 00:25:40.059 { 00:25:40.059 "name": "BaseBdev2", 00:25:40.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.059 "is_configured": false, 00:25:40.059 "data_offset": 0, 00:25:40.059 "data_size": 0 00:25:40.059 }, 00:25:40.059 { 00:25:40.059 "name": "BaseBdev3", 00:25:40.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.059 "is_configured": false, 00:25:40.059 "data_offset": 0, 00:25:40.059 "data_size": 0 
00:25:40.059 } 00:25:40.059 ] 00:25:40.059 }' 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.059 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.626 [2024-10-28 13:36:54.553699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:40.626 [2024-10-28 13:36:54.553799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.626 [2024-10-28 13:36:54.561653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.626 [2024-10-28 13:36:54.564598] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:40.626 [2024-10-28 13:36:54.564774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:40.626 [2024-10-28 13:36:54.564903] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:40.626 [2024-10-28 13:36:54.564969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.626 13:36:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.626 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.626 "name": "Existed_Raid", 00:25:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.626 "strip_size_kb": 64, 00:25:40.626 "state": "configuring", 00:25:40.626 "raid_level": "concat", 00:25:40.626 "superblock": false, 00:25:40.626 "num_base_bdevs": 3, 00:25:40.626 "num_base_bdevs_discovered": 1, 00:25:40.626 "num_base_bdevs_operational": 3, 00:25:40.626 "base_bdevs_list": [ 00:25:40.626 { 00:25:40.626 "name": "BaseBdev1", 00:25:40.626 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 00:25:40.626 "is_configured": true, 00:25:40.626 "data_offset": 0, 00:25:40.626 "data_size": 65536 00:25:40.626 }, 00:25:40.626 { 00:25:40.626 "name": "BaseBdev2", 00:25:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.626 "is_configured": false, 00:25:40.626 "data_offset": 0, 00:25:40.626 "data_size": 0 00:25:40.626 }, 00:25:40.626 { 00:25:40.626 "name": "BaseBdev3", 00:25:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.626 "is_configured": false, 00:25:40.626 "data_offset": 0, 00:25:40.626 "data_size": 0 00:25:40.627 } 00:25:40.627 ] 00:25:40.627 }' 00:25:40.627 13:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.627 13:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.194 [2024-10-28 13:36:55.083495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:41.194 BaseBdev2 00:25:41.194 
13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.194 [ 00:25:41.194 { 00:25:41.194 "name": "BaseBdev2", 00:25:41.194 "aliases": [ 00:25:41.194 "59d4f70f-920a-4f06-9747-df55b6cff9b4" 00:25:41.194 ], 00:25:41.194 "product_name": "Malloc disk", 00:25:41.194 "block_size": 512, 00:25:41.194 "num_blocks": 65536, 00:25:41.194 "uuid": "59d4f70f-920a-4f06-9747-df55b6cff9b4", 00:25:41.194 "assigned_rate_limits": { 00:25:41.194 "rw_ios_per_sec": 0, 00:25:41.194 "rw_mbytes_per_sec": 0, 
00:25:41.194 "r_mbytes_per_sec": 0, 00:25:41.194 "w_mbytes_per_sec": 0 00:25:41.194 }, 00:25:41.194 "claimed": true, 00:25:41.194 "claim_type": "exclusive_write", 00:25:41.194 "zoned": false, 00:25:41.194 "supported_io_types": { 00:25:41.194 "read": true, 00:25:41.194 "write": true, 00:25:41.194 "unmap": true, 00:25:41.194 "flush": true, 00:25:41.194 "reset": true, 00:25:41.194 "nvme_admin": false, 00:25:41.194 "nvme_io": false, 00:25:41.194 "nvme_io_md": false, 00:25:41.194 "write_zeroes": true, 00:25:41.194 "zcopy": true, 00:25:41.194 "get_zone_info": false, 00:25:41.194 "zone_management": false, 00:25:41.194 "zone_append": false, 00:25:41.194 "compare": false, 00:25:41.194 "compare_and_write": false, 00:25:41.194 "abort": true, 00:25:41.194 "seek_hole": false, 00:25:41.194 "seek_data": false, 00:25:41.194 "copy": true, 00:25:41.194 "nvme_iov_md": false 00:25:41.194 }, 00:25:41.194 "memory_domains": [ 00:25:41.194 { 00:25:41.194 "dma_device_id": "system", 00:25:41.194 "dma_device_type": 1 00:25:41.194 }, 00:25:41.194 { 00:25:41.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.194 "dma_device_type": 2 00:25:41.194 } 00:25:41.194 ], 00:25:41.194 "driver_specific": {} 00:25:41.194 } 00:25:41.194 ] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.194 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.194 "name": "Existed_Raid", 00:25:41.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.194 "strip_size_kb": 64, 00:25:41.194 "state": "configuring", 00:25:41.194 "raid_level": "concat", 00:25:41.194 "superblock": false, 00:25:41.194 "num_base_bdevs": 3, 00:25:41.194 "num_base_bdevs_discovered": 2, 00:25:41.194 "num_base_bdevs_operational": 3, 00:25:41.194 "base_bdevs_list": [ 00:25:41.194 { 00:25:41.194 "name": "BaseBdev1", 00:25:41.194 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 
00:25:41.194 "is_configured": true, 00:25:41.194 "data_offset": 0, 00:25:41.194 "data_size": 65536 00:25:41.194 }, 00:25:41.194 { 00:25:41.194 "name": "BaseBdev2", 00:25:41.194 "uuid": "59d4f70f-920a-4f06-9747-df55b6cff9b4", 00:25:41.194 "is_configured": true, 00:25:41.194 "data_offset": 0, 00:25:41.194 "data_size": 65536 00:25:41.194 }, 00:25:41.194 { 00:25:41.194 "name": "BaseBdev3", 00:25:41.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.195 "is_configured": false, 00:25:41.195 "data_offset": 0, 00:25:41.195 "data_size": 0 00:25:41.195 } 00:25:41.195 ] 00:25:41.195 }' 00:25:41.195 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.195 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.762 [2024-10-28 13:36:55.669914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:41.762 [2024-10-28 13:36:55.669994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:41.762 [2024-10-28 13:36:55.670019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:41.762 [2024-10-28 13:36:55.670578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:41.762 [2024-10-28 13:36:55.670835] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:41.762 [2024-10-28 13:36:55.671096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:25:41.762 [2024-10-28 13:36:55.671468] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.762 BaseBdev3 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:41.762 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.763 [ 00:25:41.763 { 00:25:41.763 "name": "BaseBdev3", 00:25:41.763 "aliases": [ 00:25:41.763 "a01f89f9-3f50-46e4-a672-685bb53710fc" 00:25:41.763 ], 00:25:41.763 "product_name": "Malloc disk", 00:25:41.763 "block_size": 512, 00:25:41.763 "num_blocks": 65536, 00:25:41.763 "uuid": "a01f89f9-3f50-46e4-a672-685bb53710fc", 00:25:41.763 
"assigned_rate_limits": { 00:25:41.763 "rw_ios_per_sec": 0, 00:25:41.763 "rw_mbytes_per_sec": 0, 00:25:41.763 "r_mbytes_per_sec": 0, 00:25:41.763 "w_mbytes_per_sec": 0 00:25:41.763 }, 00:25:41.763 "claimed": true, 00:25:41.763 "claim_type": "exclusive_write", 00:25:41.763 "zoned": false, 00:25:41.763 "supported_io_types": { 00:25:41.763 "read": true, 00:25:41.763 "write": true, 00:25:41.763 "unmap": true, 00:25:41.763 "flush": true, 00:25:41.763 "reset": true, 00:25:41.763 "nvme_admin": false, 00:25:41.763 "nvme_io": false, 00:25:41.763 "nvme_io_md": false, 00:25:41.763 "write_zeroes": true, 00:25:41.763 "zcopy": true, 00:25:41.763 "get_zone_info": false, 00:25:41.763 "zone_management": false, 00:25:41.763 "zone_append": false, 00:25:41.763 "compare": false, 00:25:41.763 "compare_and_write": false, 00:25:41.763 "abort": true, 00:25:41.763 "seek_hole": false, 00:25:41.763 "seek_data": false, 00:25:41.763 "copy": true, 00:25:41.763 "nvme_iov_md": false 00:25:41.763 }, 00:25:41.763 "memory_domains": [ 00:25:41.763 { 00:25:41.763 "dma_device_id": "system", 00:25:41.763 "dma_device_type": 1 00:25:41.763 }, 00:25:41.763 { 00:25:41.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.763 "dma_device_type": 2 00:25:41.763 } 00:25:41.763 ], 00:25:41.763 "driver_specific": {} 00:25:41.763 } 00:25:41.763 ] 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:41.763 "name": "Existed_Raid", 00:25:41.763 "uuid": "7b560d06-a6a8-414d-a857-6b4ea138317c", 00:25:41.763 "strip_size_kb": 64, 00:25:41.763 "state": "online", 00:25:41.763 "raid_level": "concat", 00:25:41.763 "superblock": false, 00:25:41.763 "num_base_bdevs": 3, 00:25:41.763 "num_base_bdevs_discovered": 3, 00:25:41.763 "num_base_bdevs_operational": 3, 00:25:41.763 "base_bdevs_list": [ 00:25:41.763 { 
00:25:41.763 "name": "BaseBdev1", 00:25:41.763 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 00:25:41.763 "is_configured": true, 00:25:41.763 "data_offset": 0, 00:25:41.763 "data_size": 65536 00:25:41.763 }, 00:25:41.763 { 00:25:41.763 "name": "BaseBdev2", 00:25:41.763 "uuid": "59d4f70f-920a-4f06-9747-df55b6cff9b4", 00:25:41.763 "is_configured": true, 00:25:41.763 "data_offset": 0, 00:25:41.763 "data_size": 65536 00:25:41.763 }, 00:25:41.763 { 00:25:41.763 "name": "BaseBdev3", 00:25:41.763 "uuid": "a01f89f9-3f50-46e4-a672-685bb53710fc", 00:25:41.763 "is_configured": true, 00:25:41.763 "data_offset": 0, 00:25:41.763 "data_size": 65536 00:25:41.763 } 00:25:41.763 ] 00:25:41.763 }' 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:41.763 13:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:25:42.331 [2024-10-28 13:36:56.258550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:42.331 "name": "Existed_Raid", 00:25:42.331 "aliases": [ 00:25:42.331 "7b560d06-a6a8-414d-a857-6b4ea138317c" 00:25:42.331 ], 00:25:42.331 "product_name": "Raid Volume", 00:25:42.331 "block_size": 512, 00:25:42.331 "num_blocks": 196608, 00:25:42.331 "uuid": "7b560d06-a6a8-414d-a857-6b4ea138317c", 00:25:42.331 "assigned_rate_limits": { 00:25:42.331 "rw_ios_per_sec": 0, 00:25:42.331 "rw_mbytes_per_sec": 0, 00:25:42.331 "r_mbytes_per_sec": 0, 00:25:42.331 "w_mbytes_per_sec": 0 00:25:42.331 }, 00:25:42.331 "claimed": false, 00:25:42.331 "zoned": false, 00:25:42.331 "supported_io_types": { 00:25:42.331 "read": true, 00:25:42.331 "write": true, 00:25:42.331 "unmap": true, 00:25:42.331 "flush": true, 00:25:42.331 "reset": true, 00:25:42.331 "nvme_admin": false, 00:25:42.331 "nvme_io": false, 00:25:42.331 "nvme_io_md": false, 00:25:42.331 "write_zeroes": true, 00:25:42.331 "zcopy": false, 00:25:42.331 "get_zone_info": false, 00:25:42.331 "zone_management": false, 00:25:42.331 "zone_append": false, 00:25:42.331 "compare": false, 00:25:42.331 "compare_and_write": false, 00:25:42.331 "abort": false, 00:25:42.331 "seek_hole": false, 00:25:42.331 "seek_data": false, 00:25:42.331 "copy": false, 00:25:42.331 "nvme_iov_md": false 00:25:42.331 }, 00:25:42.331 "memory_domains": [ 00:25:42.331 { 00:25:42.331 "dma_device_id": "system", 00:25:42.331 "dma_device_type": 1 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.331 "dma_device_type": 2 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "dma_device_id": "system", 00:25:42.331 "dma_device_type": 1 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.331 "dma_device_type": 2 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "dma_device_id": "system", 00:25:42.331 "dma_device_type": 1 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.331 "dma_device_type": 2 00:25:42.331 } 00:25:42.331 ], 00:25:42.331 "driver_specific": { 00:25:42.331 "raid": { 00:25:42.331 "uuid": "7b560d06-a6a8-414d-a857-6b4ea138317c", 00:25:42.331 "strip_size_kb": 64, 00:25:42.331 "state": "online", 00:25:42.331 "raid_level": "concat", 00:25:42.331 "superblock": false, 00:25:42.331 "num_base_bdevs": 3, 00:25:42.331 "num_base_bdevs_discovered": 3, 00:25:42.331 "num_base_bdevs_operational": 3, 00:25:42.331 "base_bdevs_list": [ 00:25:42.331 { 00:25:42.331 "name": "BaseBdev1", 00:25:42.331 "uuid": "d2c92caf-5f18-490c-b3a3-5e125b014e6c", 00:25:42.331 "is_configured": true, 00:25:42.331 "data_offset": 0, 00:25:42.331 "data_size": 65536 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "name": "BaseBdev2", 00:25:42.331 "uuid": "59d4f70f-920a-4f06-9747-df55b6cff9b4", 00:25:42.331 "is_configured": true, 00:25:42.331 "data_offset": 0, 00:25:42.331 "data_size": 65536 00:25:42.331 }, 00:25:42.331 { 00:25:42.331 "name": "BaseBdev3", 00:25:42.331 "uuid": "a01f89f9-3f50-46e4-a672-685bb53710fc", 00:25:42.331 "is_configured": true, 00:25:42.331 "data_offset": 0, 00:25:42.331 "data_size": 65536 00:25:42.331 } 00:25:42.331 ] 00:25:42.331 } 00:25:42.331 } 00:25:42.331 }' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:42.331 BaseBdev2 00:25:42.331 BaseBdev3' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.331 13:36:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.331 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:42.590 13:36:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:42.590 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.591 [2024-10-28 13:36:56.586368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:42.591 [2024-10-28 13:36:56.586412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.591 [2024-10-28 13:36:56.586486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.591 13:36:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:42.591 "name": "Existed_Raid", 00:25:42.591 "uuid": "7b560d06-a6a8-414d-a857-6b4ea138317c", 00:25:42.591 "strip_size_kb": 64, 00:25:42.591 "state": "offline", 00:25:42.591 "raid_level": "concat", 00:25:42.591 "superblock": false, 00:25:42.591 "num_base_bdevs": 3, 00:25:42.591 "num_base_bdevs_discovered": 2, 00:25:42.591 "num_base_bdevs_operational": 2, 00:25:42.591 "base_bdevs_list": [ 00:25:42.591 { 00:25:42.591 "name": null, 00:25:42.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.591 "is_configured": false, 00:25:42.591 "data_offset": 0, 00:25:42.591 "data_size": 65536 00:25:42.591 }, 00:25:42.591 { 00:25:42.591 "name": "BaseBdev2", 00:25:42.591 "uuid": "59d4f70f-920a-4f06-9747-df55b6cff9b4", 00:25:42.591 "is_configured": true, 00:25:42.591 "data_offset": 0, 00:25:42.591 "data_size": 65536 00:25:42.591 }, 00:25:42.591 { 00:25:42.591 "name": "BaseBdev3", 00:25:42.591 "uuid": "a01f89f9-3f50-46e4-a672-685bb53710fc", 00:25:42.591 "is_configured": true, 00:25:42.591 "data_offset": 0, 00:25:42.591 "data_size": 65536 00:25:42.591 } 00:25:42.591 ] 00:25:42.591 }' 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:42.591 13:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.171 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.172 [2024-10-28 13:36:57.177879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.172 [2024-10-28 13:36:57.253433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:43.172 [2024-10-28 13:36:57.253513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.172 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.469 BaseBdev2 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.469 [ 00:25:43.469 { 00:25:43.469 "name": "BaseBdev2", 00:25:43.469 "aliases": [ 00:25:43.469 
"f862cae9-ef00-4884-ab30-258b58f9f846" 00:25:43.469 ], 00:25:43.469 "product_name": "Malloc disk", 00:25:43.469 "block_size": 512, 00:25:43.469 "num_blocks": 65536, 00:25:43.469 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:43.469 "assigned_rate_limits": { 00:25:43.469 "rw_ios_per_sec": 0, 00:25:43.469 "rw_mbytes_per_sec": 0, 00:25:43.469 "r_mbytes_per_sec": 0, 00:25:43.469 "w_mbytes_per_sec": 0 00:25:43.469 }, 00:25:43.469 "claimed": false, 00:25:43.469 "zoned": false, 00:25:43.469 "supported_io_types": { 00:25:43.469 "read": true, 00:25:43.469 "write": true, 00:25:43.469 "unmap": true, 00:25:43.469 "flush": true, 00:25:43.469 "reset": true, 00:25:43.469 "nvme_admin": false, 00:25:43.469 "nvme_io": false, 00:25:43.469 "nvme_io_md": false, 00:25:43.469 "write_zeroes": true, 00:25:43.469 "zcopy": true, 00:25:43.469 "get_zone_info": false, 00:25:43.469 "zone_management": false, 00:25:43.469 "zone_append": false, 00:25:43.469 "compare": false, 00:25:43.469 "compare_and_write": false, 00:25:43.469 "abort": true, 00:25:43.469 "seek_hole": false, 00:25:43.469 "seek_data": false, 00:25:43.469 "copy": true, 00:25:43.469 "nvme_iov_md": false 00:25:43.469 }, 00:25:43.469 "memory_domains": [ 00:25:43.469 { 00:25:43.469 "dma_device_id": "system", 00:25:43.469 "dma_device_type": 1 00:25:43.469 }, 00:25:43.469 { 00:25:43.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.469 "dma_device_type": 2 00:25:43.469 } 00:25:43.469 ], 00:25:43.469 "driver_specific": {} 00:25:43.469 } 00:25:43.469 ] 00:25:43.469 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 BaseBdev3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 [ 00:25:43.470 { 00:25:43.470 "name": "BaseBdev3", 00:25:43.470 "aliases": [ 00:25:43.470 
"5e7478cf-cb04-4baa-966e-25a0ef140b4e" 00:25:43.470 ], 00:25:43.470 "product_name": "Malloc disk", 00:25:43.470 "block_size": 512, 00:25:43.470 "num_blocks": 65536, 00:25:43.470 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:43.470 "assigned_rate_limits": { 00:25:43.470 "rw_ios_per_sec": 0, 00:25:43.470 "rw_mbytes_per_sec": 0, 00:25:43.470 "r_mbytes_per_sec": 0, 00:25:43.470 "w_mbytes_per_sec": 0 00:25:43.470 }, 00:25:43.470 "claimed": false, 00:25:43.470 "zoned": false, 00:25:43.470 "supported_io_types": { 00:25:43.470 "read": true, 00:25:43.470 "write": true, 00:25:43.470 "unmap": true, 00:25:43.470 "flush": true, 00:25:43.470 "reset": true, 00:25:43.470 "nvme_admin": false, 00:25:43.470 "nvme_io": false, 00:25:43.470 "nvme_io_md": false, 00:25:43.470 "write_zeroes": true, 00:25:43.470 "zcopy": true, 00:25:43.470 "get_zone_info": false, 00:25:43.470 "zone_management": false, 00:25:43.470 "zone_append": false, 00:25:43.470 "compare": false, 00:25:43.470 "compare_and_write": false, 00:25:43.470 "abort": true, 00:25:43.470 "seek_hole": false, 00:25:43.470 "seek_data": false, 00:25:43.470 "copy": true, 00:25:43.470 "nvme_iov_md": false 00:25:43.470 }, 00:25:43.470 "memory_domains": [ 00:25:43.470 { 00:25:43.470 "dma_device_id": "system", 00:25:43.470 "dma_device_type": 1 00:25:43.470 }, 00:25:43.470 { 00:25:43.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.470 "dma_device_type": 2 00:25:43.470 } 00:25:43.470 ], 00:25:43.470 "driver_specific": {} 00:25:43.470 } 00:25:43.470 ] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 [2024-10-28 13:36:57.427661] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:43.470 [2024-10-28 13:36:57.428096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:43.470 [2024-10-28 13:36:57.428294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:43.470 [2024-10-28 13:36:57.431004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:43.470 "name": "Existed_Raid", 00:25:43.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.470 "strip_size_kb": 64, 00:25:43.470 "state": "configuring", 00:25:43.470 "raid_level": "concat", 00:25:43.470 "superblock": false, 00:25:43.470 "num_base_bdevs": 3, 00:25:43.470 "num_base_bdevs_discovered": 2, 00:25:43.470 "num_base_bdevs_operational": 3, 00:25:43.470 "base_bdevs_list": [ 00:25:43.470 { 00:25:43.470 "name": "BaseBdev1", 00:25:43.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.470 "is_configured": false, 00:25:43.470 "data_offset": 0, 00:25:43.470 "data_size": 0 00:25:43.470 }, 00:25:43.470 { 00:25:43.470 "name": "BaseBdev2", 00:25:43.470 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:43.470 "is_configured": true, 00:25:43.470 "data_offset": 0, 00:25:43.470 "data_size": 65536 00:25:43.470 }, 00:25:43.470 { 00:25:43.470 "name": "BaseBdev3", 00:25:43.470 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:43.470 "is_configured": true, 00:25:43.470 "data_offset": 0, 00:25:43.470 "data_size": 65536 00:25:43.470 } 00:25:43.470 ] 00:25:43.470 }' 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:25:43.470 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.039 [2024-10-28 13:36:57.955835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:44.039 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.040 13:36:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.040 13:36:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.040 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.040 "name": "Existed_Raid", 00:25:44.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.040 "strip_size_kb": 64, 00:25:44.040 "state": "configuring", 00:25:44.040 "raid_level": "concat", 00:25:44.040 "superblock": false, 00:25:44.040 "num_base_bdevs": 3, 00:25:44.040 "num_base_bdevs_discovered": 1, 00:25:44.040 "num_base_bdevs_operational": 3, 00:25:44.040 "base_bdevs_list": [ 00:25:44.040 { 00:25:44.040 "name": "BaseBdev1", 00:25:44.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.040 "is_configured": false, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 0 00:25:44.040 }, 00:25:44.040 { 00:25:44.040 "name": null, 00:25:44.040 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:44.040 "is_configured": false, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 }, 00:25:44.040 { 00:25:44.040 "name": "BaseBdev3", 00:25:44.040 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:44.040 "is_configured": true, 00:25:44.040 "data_offset": 0, 00:25:44.040 "data_size": 65536 00:25:44.040 } 00:25:44.040 ] 00:25:44.040 }' 00:25:44.040 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.040 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:44.606 13:36:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 [2024-10-28 13:36:58.564554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.606 BaseBdev1 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.606 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 [ 00:25:44.606 { 00:25:44.606 "name": "BaseBdev1", 00:25:44.606 "aliases": [ 00:25:44.607 "f2c5c407-da99-41f8-8fad-b574b60e6ed1" 00:25:44.607 ], 00:25:44.607 "product_name": "Malloc disk", 00:25:44.607 "block_size": 512, 00:25:44.607 "num_blocks": 65536, 00:25:44.607 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:44.607 "assigned_rate_limits": { 00:25:44.607 "rw_ios_per_sec": 0, 00:25:44.607 "rw_mbytes_per_sec": 0, 00:25:44.607 "r_mbytes_per_sec": 0, 00:25:44.607 "w_mbytes_per_sec": 0 00:25:44.607 }, 00:25:44.607 "claimed": true, 00:25:44.607 "claim_type": "exclusive_write", 00:25:44.607 "zoned": false, 00:25:44.607 "supported_io_types": { 00:25:44.607 "read": true, 00:25:44.607 "write": true, 00:25:44.607 "unmap": true, 00:25:44.607 "flush": true, 00:25:44.607 "reset": true, 00:25:44.607 "nvme_admin": false, 00:25:44.607 "nvme_io": false, 00:25:44.607 "nvme_io_md": false, 00:25:44.607 "write_zeroes": true, 00:25:44.607 "zcopy": true, 00:25:44.607 "get_zone_info": false, 00:25:44.607 "zone_management": false, 00:25:44.607 "zone_append": false, 00:25:44.607 "compare": false, 00:25:44.607 "compare_and_write": false, 00:25:44.607 "abort": true, 00:25:44.607 "seek_hole": false, 00:25:44.607 "seek_data": false, 00:25:44.607 "copy": true, 00:25:44.607 "nvme_iov_md": false 00:25:44.607 }, 00:25:44.607 "memory_domains": [ 00:25:44.607 { 00:25:44.607 
"dma_device_id": "system", 00:25:44.607 "dma_device_type": 1 00:25:44.607 }, 00:25:44.607 { 00:25:44.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.607 "dma_device_type": 2 00:25:44.607 } 00:25:44.607 ], 00:25:44.607 "driver_specific": {} 00:25:44.607 } 00:25:44.607 ] 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.607 "name": "Existed_Raid", 00:25:44.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.607 "strip_size_kb": 64, 00:25:44.607 "state": "configuring", 00:25:44.607 "raid_level": "concat", 00:25:44.607 "superblock": false, 00:25:44.607 "num_base_bdevs": 3, 00:25:44.607 "num_base_bdevs_discovered": 2, 00:25:44.607 "num_base_bdevs_operational": 3, 00:25:44.607 "base_bdevs_list": [ 00:25:44.607 { 00:25:44.607 "name": "BaseBdev1", 00:25:44.607 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:44.607 "is_configured": true, 00:25:44.607 "data_offset": 0, 00:25:44.607 "data_size": 65536 00:25:44.607 }, 00:25:44.607 { 00:25:44.607 "name": null, 00:25:44.607 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:44.607 "is_configured": false, 00:25:44.607 "data_offset": 0, 00:25:44.607 "data_size": 65536 00:25:44.607 }, 00:25:44.607 { 00:25:44.607 "name": "BaseBdev3", 00:25:44.607 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:44.607 "is_configured": true, 00:25:44.607 "data_offset": 0, 00:25:44.607 "data_size": 65536 00:25:44.607 } 00:25:44.607 ] 00:25:44.607 }' 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.607 13:36:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.173 13:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.173 [2024-10-28 13:36:59.184955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.173 13:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.173 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.174 "name": "Existed_Raid", 00:25:45.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.174 "strip_size_kb": 64, 00:25:45.174 "state": "configuring", 00:25:45.174 "raid_level": "concat", 00:25:45.174 "superblock": false, 00:25:45.174 "num_base_bdevs": 3, 00:25:45.174 "num_base_bdevs_discovered": 1, 00:25:45.174 "num_base_bdevs_operational": 3, 00:25:45.174 "base_bdevs_list": [ 00:25:45.174 { 00:25:45.174 "name": "BaseBdev1", 00:25:45.174 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:45.174 "is_configured": true, 00:25:45.174 "data_offset": 0, 00:25:45.174 "data_size": 65536 00:25:45.174 }, 00:25:45.174 { 00:25:45.174 "name": null, 00:25:45.174 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:45.174 "is_configured": false, 00:25:45.174 "data_offset": 0, 00:25:45.174 "data_size": 65536 00:25:45.174 }, 00:25:45.174 { 00:25:45.174 "name": null, 00:25:45.174 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:45.174 "is_configured": false, 00:25:45.174 "data_offset": 0, 00:25:45.174 "data_size": 65536 00:25:45.174 } 00:25:45.174 ] 00:25:45.174 }' 00:25:45.174 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.174 13:36:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.739 [2024-10-28 13:36:59.777230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:45.739 13:36:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.739 "name": "Existed_Raid", 00:25:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.739 "strip_size_kb": 64, 00:25:45.739 "state": "configuring", 00:25:45.739 "raid_level": "concat", 00:25:45.739 "superblock": false, 00:25:45.739 "num_base_bdevs": 3, 00:25:45.739 "num_base_bdevs_discovered": 2, 00:25:45.739 "num_base_bdevs_operational": 3, 00:25:45.739 "base_bdevs_list": [ 00:25:45.739 { 00:25:45.739 "name": "BaseBdev1", 00:25:45.739 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:45.739 "is_configured": true, 00:25:45.739 "data_offset": 0, 00:25:45.739 "data_size": 65536 00:25:45.739 }, 00:25:45.739 { 00:25:45.739 "name": null, 00:25:45.739 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:45.739 "is_configured": false, 00:25:45.739 "data_offset": 
0, 00:25:45.739 "data_size": 65536 00:25:45.739 }, 00:25:45.739 { 00:25:45.739 "name": "BaseBdev3", 00:25:45.739 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:45.739 "is_configured": true, 00:25:45.739 "data_offset": 0, 00:25:45.739 "data_size": 65536 00:25:45.739 } 00:25:45.739 ] 00:25:45.739 }' 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.739 13:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.305 [2024-10-28 13:37:00.373527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.305 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.305 "name": "Existed_Raid", 00:25:46.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.305 "strip_size_kb": 64, 00:25:46.305 "state": "configuring", 00:25:46.305 "raid_level": "concat", 00:25:46.305 "superblock": false, 00:25:46.305 "num_base_bdevs": 3, 00:25:46.305 "num_base_bdevs_discovered": 1, 00:25:46.305 "num_base_bdevs_operational": 3, 00:25:46.305 "base_bdevs_list": [ 
00:25:46.305 { 00:25:46.305 "name": null, 00:25:46.305 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:46.305 "is_configured": false, 00:25:46.305 "data_offset": 0, 00:25:46.305 "data_size": 65536 00:25:46.305 }, 00:25:46.305 { 00:25:46.305 "name": null, 00:25:46.305 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:46.305 "is_configured": false, 00:25:46.305 "data_offset": 0, 00:25:46.305 "data_size": 65536 00:25:46.305 }, 00:25:46.305 { 00:25:46.305 "name": "BaseBdev3", 00:25:46.305 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:46.305 "is_configured": true, 00:25:46.305 "data_offset": 0, 00:25:46.305 "data_size": 65536 00:25:46.305 } 00:25:46.305 ] 00:25:46.305 }' 00:25:46.306 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.306 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.872 [2024-10-28 13:37:00.979862] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.872 13:37:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.872 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.131 13:37:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.131 "name": "Existed_Raid", 00:25:47.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.131 "strip_size_kb": 64, 00:25:47.131 "state": "configuring", 00:25:47.131 "raid_level": "concat", 00:25:47.131 "superblock": false, 00:25:47.131 "num_base_bdevs": 3, 00:25:47.131 "num_base_bdevs_discovered": 2, 00:25:47.131 "num_base_bdevs_operational": 3, 00:25:47.131 "base_bdevs_list": [ 00:25:47.131 { 00:25:47.131 "name": null, 00:25:47.131 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:47.131 "is_configured": false, 00:25:47.131 "data_offset": 0, 00:25:47.131 "data_size": 65536 00:25:47.131 }, 00:25:47.131 { 00:25:47.131 "name": "BaseBdev2", 00:25:47.131 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:47.131 "is_configured": true, 00:25:47.131 "data_offset": 0, 00:25:47.131 "data_size": 65536 00:25:47.131 }, 00:25:47.131 { 00:25:47.131 "name": "BaseBdev3", 00:25:47.131 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:47.131 "is_configured": true, 00:25:47.131 "data_offset": 0, 00:25:47.131 "data_size": 65536 00:25:47.131 } 00:25:47.131 ] 00:25:47.131 }' 00:25:47.131 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.131 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.391 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.391 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:47.391 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.391 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.391 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# [[ true == \t\r\u\e ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2c5c407-da99-41f8-8fad-b574b60e6ed1 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.650 [2024-10-28 13:37:01.633519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:47.650 [2024-10-28 13:37:01.633595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:47.650 [2024-10-28 13:37:01.633607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:47.650 [2024-10-28 13:37:01.633962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:47.650 [2024-10-28 13:37:01.634123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:47.650 [2024-10-28 13:37:01.634144] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:47.650 [2024-10-28 13:37:01.634421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.650 NewBaseBdev 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.650 [ 00:25:47.650 { 00:25:47.650 "name": "NewBaseBdev", 00:25:47.650 "aliases": [ 00:25:47.650 "f2c5c407-da99-41f8-8fad-b574b60e6ed1" 00:25:47.650 ], 00:25:47.650 "product_name": "Malloc disk", 00:25:47.650 "block_size": 512, 00:25:47.650 "num_blocks": 65536, 00:25:47.650 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:47.650 "assigned_rate_limits": { 00:25:47.650 "rw_ios_per_sec": 0, 00:25:47.650 "rw_mbytes_per_sec": 0, 00:25:47.650 "r_mbytes_per_sec": 0, 00:25:47.650 "w_mbytes_per_sec": 0 
00:25:47.650 }, 00:25:47.650 "claimed": true, 00:25:47.650 "claim_type": "exclusive_write", 00:25:47.650 "zoned": false, 00:25:47.650 "supported_io_types": { 00:25:47.650 "read": true, 00:25:47.650 "write": true, 00:25:47.650 "unmap": true, 00:25:47.650 "flush": true, 00:25:47.650 "reset": true, 00:25:47.650 "nvme_admin": false, 00:25:47.650 "nvme_io": false, 00:25:47.650 "nvme_io_md": false, 00:25:47.650 "write_zeroes": true, 00:25:47.650 "zcopy": true, 00:25:47.650 "get_zone_info": false, 00:25:47.650 "zone_management": false, 00:25:47.650 "zone_append": false, 00:25:47.650 "compare": false, 00:25:47.650 "compare_and_write": false, 00:25:47.650 "abort": true, 00:25:47.650 "seek_hole": false, 00:25:47.650 "seek_data": false, 00:25:47.650 "copy": true, 00:25:47.650 "nvme_iov_md": false 00:25:47.650 }, 00:25:47.650 "memory_domains": [ 00:25:47.650 { 00:25:47.650 "dma_device_id": "system", 00:25:47.650 "dma_device_type": 1 00:25:47.650 }, 00:25:47.650 { 00:25:47.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.650 "dma_device_type": 2 00:25:47.650 } 00:25:47.650 ], 00:25:47.650 "driver_specific": {} 00:25:47.650 } 00:25:47.650 ] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:47.650 13:37:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.650 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.651 "name": "Existed_Raid", 00:25:47.651 "uuid": "9f194377-55b1-4ab8-8909-91805eb31aeb", 00:25:47.651 "strip_size_kb": 64, 00:25:47.651 "state": "online", 00:25:47.651 "raid_level": "concat", 00:25:47.651 "superblock": false, 00:25:47.651 "num_base_bdevs": 3, 00:25:47.651 "num_base_bdevs_discovered": 3, 00:25:47.651 "num_base_bdevs_operational": 3, 00:25:47.651 "base_bdevs_list": [ 00:25:47.651 { 00:25:47.651 "name": "NewBaseBdev", 00:25:47.651 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:47.651 "is_configured": true, 00:25:47.651 "data_offset": 0, 00:25:47.651 "data_size": 65536 00:25:47.651 }, 00:25:47.651 { 00:25:47.651 "name": "BaseBdev2", 00:25:47.651 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:47.651 "is_configured": true, 00:25:47.651 
"data_offset": 0, 00:25:47.651 "data_size": 65536 00:25:47.651 }, 00:25:47.651 { 00:25:47.651 "name": "BaseBdev3", 00:25:47.651 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:47.651 "is_configured": true, 00:25:47.651 "data_offset": 0, 00:25:47.651 "data_size": 65536 00:25:47.651 } 00:25:47.651 ] 00:25:47.651 }' 00:25:47.651 13:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.651 13:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.217 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.218 [2024-10-28 13:37:02.242193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.218 "name": 
"Existed_Raid", 00:25:48.218 "aliases": [ 00:25:48.218 "9f194377-55b1-4ab8-8909-91805eb31aeb" 00:25:48.218 ], 00:25:48.218 "product_name": "Raid Volume", 00:25:48.218 "block_size": 512, 00:25:48.218 "num_blocks": 196608, 00:25:48.218 "uuid": "9f194377-55b1-4ab8-8909-91805eb31aeb", 00:25:48.218 "assigned_rate_limits": { 00:25:48.218 "rw_ios_per_sec": 0, 00:25:48.218 "rw_mbytes_per_sec": 0, 00:25:48.218 "r_mbytes_per_sec": 0, 00:25:48.218 "w_mbytes_per_sec": 0 00:25:48.218 }, 00:25:48.218 "claimed": false, 00:25:48.218 "zoned": false, 00:25:48.218 "supported_io_types": { 00:25:48.218 "read": true, 00:25:48.218 "write": true, 00:25:48.218 "unmap": true, 00:25:48.218 "flush": true, 00:25:48.218 "reset": true, 00:25:48.218 "nvme_admin": false, 00:25:48.218 "nvme_io": false, 00:25:48.218 "nvme_io_md": false, 00:25:48.218 "write_zeroes": true, 00:25:48.218 "zcopy": false, 00:25:48.218 "get_zone_info": false, 00:25:48.218 "zone_management": false, 00:25:48.218 "zone_append": false, 00:25:48.218 "compare": false, 00:25:48.218 "compare_and_write": false, 00:25:48.218 "abort": false, 00:25:48.218 "seek_hole": false, 00:25:48.218 "seek_data": false, 00:25:48.218 "copy": false, 00:25:48.218 "nvme_iov_md": false 00:25:48.218 }, 00:25:48.218 "memory_domains": [ 00:25:48.218 { 00:25:48.218 "dma_device_id": "system", 00:25:48.218 "dma_device_type": 1 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.218 "dma_device_type": 2 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "dma_device_id": "system", 00:25:48.218 "dma_device_type": 1 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.218 "dma_device_type": 2 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "dma_device_id": "system", 00:25:48.218 "dma_device_type": 1 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.218 "dma_device_type": 2 00:25:48.218 } 00:25:48.218 ], 00:25:48.218 "driver_specific": { 
00:25:48.218 "raid": { 00:25:48.218 "uuid": "9f194377-55b1-4ab8-8909-91805eb31aeb", 00:25:48.218 "strip_size_kb": 64, 00:25:48.218 "state": "online", 00:25:48.218 "raid_level": "concat", 00:25:48.218 "superblock": false, 00:25:48.218 "num_base_bdevs": 3, 00:25:48.218 "num_base_bdevs_discovered": 3, 00:25:48.218 "num_base_bdevs_operational": 3, 00:25:48.218 "base_bdevs_list": [ 00:25:48.218 { 00:25:48.218 "name": "NewBaseBdev", 00:25:48.218 "uuid": "f2c5c407-da99-41f8-8fad-b574b60e6ed1", 00:25:48.218 "is_configured": true, 00:25:48.218 "data_offset": 0, 00:25:48.218 "data_size": 65536 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "name": "BaseBdev2", 00:25:48.218 "uuid": "f862cae9-ef00-4884-ab30-258b58f9f846", 00:25:48.218 "is_configured": true, 00:25:48.218 "data_offset": 0, 00:25:48.218 "data_size": 65536 00:25:48.218 }, 00:25:48.218 { 00:25:48.218 "name": "BaseBdev3", 00:25:48.218 "uuid": "5e7478cf-cb04-4baa-966e-25a0ef140b4e", 00:25:48.218 "is_configured": true, 00:25:48.218 "data_offset": 0, 00:25:48.218 "data_size": 65536 00:25:48.218 } 00:25:48.218 ] 00:25:48.218 } 00:25:48.218 } 00:25:48.218 }' 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:48.218 BaseBdev2 00:25:48.218 BaseBdev3' 00:25:48.218 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:48.477 13:37:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.477 13:37:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.477 [2024-10-28 13:37:02.565830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:48.477 [2024-10-28 13:37:02.565889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:48.477 [2024-10-28 13:37:02.566007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:48.477 [2024-10-28 13:37:02.566097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:48.477 [2024-10-28 13:37:02.566115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78434 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78434 ']' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # kill -0 78434 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78434 00:25:48.477 killing process with pid 78434 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78434' 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78434 00:25:48.477 [2024-10-28 13:37:02.606281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:48.477 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78434 00:25:48.736 [2024-10-28 13:37:02.661746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.114 13:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:49.114 00:25:49.114 real 0m10.715s 00:25:49.114 user 0m18.725s 00:25:49.114 sys 0m1.716s 00:25:49.114 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.114 ************************************ 00:25:49.114 13:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.114 END TEST raid_state_function_test 00:25:49.114 ************************************ 00:25:49.114 13:37:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:25:49.114 13:37:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:49.114 13:37:03 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:49.114 13:37:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:49.114 ************************************ 00:25:49.114 START TEST raid_state_function_test_sb 00:25:49.114 ************************************ 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79061 00:25:49.114 Process raid pid: 79061 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79061' 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79061 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79061 ']' 00:25:49.114 
13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.114 13:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.114 [2024-10-28 13:37:03.135871] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:49.114 [2024-10-28 13:37:03.136065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.372 [2024-10-28 13:37:03.284493] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:49.373 [2024-10-28 13:37:03.313670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.373 [2024-10-28 13:37:03.382662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.373 [2024-10-28 13:37:03.459009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:49.373 [2024-10-28 13:37:03.459082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.309 [2024-10-28 13:37:04.167102] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.309 [2024-10-28 13:37:04.167212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.309 [2024-10-28 13:37:04.167234] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:50.309 [2024-10-28 13:37:04.167248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:50.309 [2024-10-28 13:37:04.167268] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:50.309 [2024-10-28 13:37:04.167281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.309 13:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.309 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.309 "name": "Existed_Raid", 00:25:50.309 "uuid": "513bea85-9217-4532-923f-a379987ea7b2", 00:25:50.309 "strip_size_kb": 64, 
00:25:50.309 "state": "configuring", 00:25:50.309 "raid_level": "concat", 00:25:50.309 "superblock": true, 00:25:50.309 "num_base_bdevs": 3, 00:25:50.309 "num_base_bdevs_discovered": 0, 00:25:50.310 "num_base_bdevs_operational": 3, 00:25:50.310 "base_bdevs_list": [ 00:25:50.310 { 00:25:50.310 "name": "BaseBdev1", 00:25:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.310 "is_configured": false, 00:25:50.310 "data_offset": 0, 00:25:50.310 "data_size": 0 00:25:50.310 }, 00:25:50.310 { 00:25:50.310 "name": "BaseBdev2", 00:25:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.310 "is_configured": false, 00:25:50.310 "data_offset": 0, 00:25:50.310 "data_size": 0 00:25:50.310 }, 00:25:50.310 { 00:25:50.310 "name": "BaseBdev3", 00:25:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.310 "is_configured": false, 00:25:50.310 "data_offset": 0, 00:25:50.310 "data_size": 0 00:25:50.310 } 00:25:50.310 ] 00:25:50.310 }' 00:25:50.310 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.310 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.568 [2024-10-28 13:37:04.715197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:50.568 [2024-10-28 13:37:04.715277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.568 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.568 [2024-10-28 13:37:04.723141] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.568 [2024-10-28 13:37:04.723246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.568 [2024-10-28 13:37:04.723267] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:50.568 [2024-10-28 13:37:04.723280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:50.568 [2024-10-28 13:37:04.723292] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:50.568 [2024-10-28 13:37:04.723304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.827 [2024-10-28 13:37:04.747749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:50.827 BaseBdev1 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.827 [ 00:25:50.827 { 00:25:50.827 "name": "BaseBdev1", 00:25:50.827 "aliases": [ 00:25:50.827 "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f" 00:25:50.827 ], 00:25:50.827 "product_name": "Malloc disk", 00:25:50.827 "block_size": 512, 00:25:50.827 "num_blocks": 65536, 00:25:50.827 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:50.827 "assigned_rate_limits": { 00:25:50.827 "rw_ios_per_sec": 0, 00:25:50.827 "rw_mbytes_per_sec": 0, 00:25:50.827 "r_mbytes_per_sec": 0, 00:25:50.827 "w_mbytes_per_sec": 0 00:25:50.827 }, 00:25:50.827 "claimed": true, 00:25:50.827 "claim_type": "exclusive_write", 00:25:50.827 "zoned": false, 00:25:50.827 "supported_io_types": { 
00:25:50.827 "read": true, 00:25:50.827 "write": true, 00:25:50.827 "unmap": true, 00:25:50.827 "flush": true, 00:25:50.827 "reset": true, 00:25:50.827 "nvme_admin": false, 00:25:50.827 "nvme_io": false, 00:25:50.827 "nvme_io_md": false, 00:25:50.827 "write_zeroes": true, 00:25:50.827 "zcopy": true, 00:25:50.827 "get_zone_info": false, 00:25:50.827 "zone_management": false, 00:25:50.827 "zone_append": false, 00:25:50.827 "compare": false, 00:25:50.827 "compare_and_write": false, 00:25:50.827 "abort": true, 00:25:50.827 "seek_hole": false, 00:25:50.827 "seek_data": false, 00:25:50.827 "copy": true, 00:25:50.827 "nvme_iov_md": false 00:25:50.827 }, 00:25:50.827 "memory_domains": [ 00:25:50.827 { 00:25:50.827 "dma_device_id": "system", 00:25:50.827 "dma_device_type": 1 00:25:50.827 }, 00:25:50.827 { 00:25:50.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.827 "dma_device_type": 2 00:25:50.827 } 00:25:50.827 ], 00:25:50.827 "driver_specific": {} 00:25:50.827 } 00:25:50.827 ] 00:25:50.827 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:50.828 13:37:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.828 "name": "Existed_Raid", 00:25:50.828 "uuid": "20e2abe4-9600-44e1-b7d0-a6c4cb2ec615", 00:25:50.828 "strip_size_kb": 64, 00:25:50.828 "state": "configuring", 00:25:50.828 "raid_level": "concat", 00:25:50.828 "superblock": true, 00:25:50.828 "num_base_bdevs": 3, 00:25:50.828 "num_base_bdevs_discovered": 1, 00:25:50.828 "num_base_bdevs_operational": 3, 00:25:50.828 "base_bdevs_list": [ 00:25:50.828 { 00:25:50.828 "name": "BaseBdev1", 00:25:50.828 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:50.828 "is_configured": true, 00:25:50.828 "data_offset": 2048, 00:25:50.828 "data_size": 63488 00:25:50.828 }, 00:25:50.828 { 00:25:50.828 "name": "BaseBdev2", 00:25:50.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.828 "is_configured": false, 00:25:50.828 "data_offset": 0, 00:25:50.828 "data_size": 0 00:25:50.828 }, 00:25:50.828 { 00:25:50.828 "name": 
"BaseBdev3", 00:25:50.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.828 "is_configured": false, 00:25:50.828 "data_offset": 0, 00:25:50.828 "data_size": 0 00:25:50.828 } 00:25:50.828 ] 00:25:50.828 }' 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.828 13:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.395 [2024-10-28 13:37:05.271987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.395 [2024-10-28 13:37:05.272102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.395 [2024-10-28 13:37:05.280027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.395 [2024-10-28 13:37:05.283190] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:51.395 [2024-10-28 13:37:05.283252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:51.395 [2024-10-28 13:37:05.283279] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:51.395 [2024-10-28 13:37:05.283299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.395 "name": "Existed_Raid", 00:25:51.395 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:51.395 "strip_size_kb": 64, 00:25:51.395 "state": "configuring", 00:25:51.395 "raid_level": "concat", 00:25:51.395 "superblock": true, 00:25:51.395 "num_base_bdevs": 3, 00:25:51.395 "num_base_bdevs_discovered": 1, 00:25:51.395 "num_base_bdevs_operational": 3, 00:25:51.395 "base_bdevs_list": [ 00:25:51.395 { 00:25:51.395 "name": "BaseBdev1", 00:25:51.395 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:51.395 "is_configured": true, 00:25:51.395 "data_offset": 2048, 00:25:51.395 "data_size": 63488 00:25:51.395 }, 00:25:51.395 { 00:25:51.395 "name": "BaseBdev2", 00:25:51.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.395 "is_configured": false, 00:25:51.395 "data_offset": 0, 00:25:51.395 "data_size": 0 00:25:51.395 }, 00:25:51.395 { 00:25:51.395 "name": "BaseBdev3", 00:25:51.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.395 "is_configured": false, 00:25:51.395 "data_offset": 0, 00:25:51.395 "data_size": 0 00:25:51.395 } 00:25:51.395 ] 00:25:51.395 }' 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.395 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.653 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:51.653 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:51.653 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.913 [2024-10-28 13:37:05.813957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:51.913 BaseBdev2 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.913 [ 00:25:51.913 { 00:25:51.913 "name": "BaseBdev2", 00:25:51.913 "aliases": [ 00:25:51.913 
"30768bd1-0f13-4274-b5fa-5d9fb3d48b99" 00:25:51.913 ], 00:25:51.913 "product_name": "Malloc disk", 00:25:51.913 "block_size": 512, 00:25:51.913 "num_blocks": 65536, 00:25:51.913 "uuid": "30768bd1-0f13-4274-b5fa-5d9fb3d48b99", 00:25:51.913 "assigned_rate_limits": { 00:25:51.913 "rw_ios_per_sec": 0, 00:25:51.913 "rw_mbytes_per_sec": 0, 00:25:51.913 "r_mbytes_per_sec": 0, 00:25:51.913 "w_mbytes_per_sec": 0 00:25:51.913 }, 00:25:51.913 "claimed": true, 00:25:51.913 "claim_type": "exclusive_write", 00:25:51.913 "zoned": false, 00:25:51.913 "supported_io_types": { 00:25:51.913 "read": true, 00:25:51.913 "write": true, 00:25:51.913 "unmap": true, 00:25:51.913 "flush": true, 00:25:51.913 "reset": true, 00:25:51.913 "nvme_admin": false, 00:25:51.913 "nvme_io": false, 00:25:51.913 "nvme_io_md": false, 00:25:51.913 "write_zeroes": true, 00:25:51.913 "zcopy": true, 00:25:51.913 "get_zone_info": false, 00:25:51.913 "zone_management": false, 00:25:51.913 "zone_append": false, 00:25:51.913 "compare": false, 00:25:51.913 "compare_and_write": false, 00:25:51.913 "abort": true, 00:25:51.913 "seek_hole": false, 00:25:51.913 "seek_data": false, 00:25:51.913 "copy": true, 00:25:51.913 "nvme_iov_md": false 00:25:51.913 }, 00:25:51.913 "memory_domains": [ 00:25:51.913 { 00:25:51.913 "dma_device_id": "system", 00:25:51.913 "dma_device_type": 1 00:25:51.913 }, 00:25:51.913 { 00:25:51.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.913 "dma_device_type": 2 00:25:51.913 } 00:25:51.913 ], 00:25:51.913 "driver_specific": {} 00:25:51.913 } 00:25:51.913 ] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.913 "name": "Existed_Raid", 00:25:51.913 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:51.913 
"strip_size_kb": 64, 00:25:51.913 "state": "configuring", 00:25:51.913 "raid_level": "concat", 00:25:51.913 "superblock": true, 00:25:51.913 "num_base_bdevs": 3, 00:25:51.913 "num_base_bdevs_discovered": 2, 00:25:51.913 "num_base_bdevs_operational": 3, 00:25:51.913 "base_bdevs_list": [ 00:25:51.913 { 00:25:51.913 "name": "BaseBdev1", 00:25:51.913 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:51.913 "is_configured": true, 00:25:51.913 "data_offset": 2048, 00:25:51.913 "data_size": 63488 00:25:51.913 }, 00:25:51.913 { 00:25:51.913 "name": "BaseBdev2", 00:25:51.913 "uuid": "30768bd1-0f13-4274-b5fa-5d9fb3d48b99", 00:25:51.913 "is_configured": true, 00:25:51.913 "data_offset": 2048, 00:25:51.913 "data_size": 63488 00:25:51.913 }, 00:25:51.913 { 00:25:51.913 "name": "BaseBdev3", 00:25:51.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.913 "is_configured": false, 00:25:51.913 "data_offset": 0, 00:25:51.913 "data_size": 0 00:25:51.913 } 00:25:51.913 ] 00:25:51.913 }' 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.913 13:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.481 [2024-10-28 13:37:06.362395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:52.481 [2024-10-28 13:37:06.362739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:52.481 [2024-10-28 13:37:06.362767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:52.481 BaseBdev3 00:25:52.481 [2024-10-28 13:37:06.363300] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:52.481 [2024-10-28 13:37:06.363553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:52.481 [2024-10-28 13:37:06.363615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.481 [2024-10-28 13:37:06.363869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.481 [ 00:25:52.481 { 00:25:52.481 "name": "BaseBdev3", 00:25:52.481 "aliases": [ 00:25:52.481 "72ec8573-3f40-438a-9aec-3b99f0701bcf" 00:25:52.481 ], 00:25:52.481 "product_name": "Malloc disk", 00:25:52.481 "block_size": 512, 00:25:52.481 "num_blocks": 65536, 00:25:52.481 "uuid": "72ec8573-3f40-438a-9aec-3b99f0701bcf", 00:25:52.481 "assigned_rate_limits": { 00:25:52.481 "rw_ios_per_sec": 0, 00:25:52.481 "rw_mbytes_per_sec": 0, 00:25:52.481 "r_mbytes_per_sec": 0, 00:25:52.481 "w_mbytes_per_sec": 0 00:25:52.481 }, 00:25:52.481 "claimed": true, 00:25:52.481 "claim_type": "exclusive_write", 00:25:52.481 "zoned": false, 00:25:52.481 "supported_io_types": { 00:25:52.481 "read": true, 00:25:52.481 "write": true, 00:25:52.481 "unmap": true, 00:25:52.481 "flush": true, 00:25:52.481 "reset": true, 00:25:52.481 "nvme_admin": false, 00:25:52.481 "nvme_io": false, 00:25:52.481 "nvme_io_md": false, 00:25:52.481 "write_zeroes": true, 00:25:52.481 "zcopy": true, 00:25:52.481 "get_zone_info": false, 00:25:52.481 "zone_management": false, 00:25:52.481 "zone_append": false, 00:25:52.481 "compare": false, 00:25:52.481 "compare_and_write": false, 00:25:52.481 "abort": true, 00:25:52.481 "seek_hole": false, 00:25:52.481 "seek_data": false, 00:25:52.481 "copy": true, 00:25:52.481 "nvme_iov_md": false 00:25:52.481 }, 00:25:52.481 "memory_domains": [ 00:25:52.481 { 00:25:52.481 "dma_device_id": "system", 00:25:52.481 "dma_device_type": 1 00:25:52.481 }, 00:25:52.481 { 00:25:52.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.481 "dma_device_type": 2 00:25:52.481 } 00:25:52.481 ], 00:25:52.481 "driver_specific": {} 00:25:52.481 } 00:25:52.481 ] 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:52.481 
13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.481 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.481 13:37:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.481 "name": "Existed_Raid", 00:25:52.481 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:52.481 "strip_size_kb": 64, 00:25:52.481 "state": "online", 00:25:52.481 "raid_level": "concat", 00:25:52.481 "superblock": true, 00:25:52.481 "num_base_bdevs": 3, 00:25:52.481 "num_base_bdevs_discovered": 3, 00:25:52.481 "num_base_bdevs_operational": 3, 00:25:52.481 "base_bdevs_list": [ 00:25:52.481 { 00:25:52.481 "name": "BaseBdev1", 00:25:52.481 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:52.481 "is_configured": true, 00:25:52.481 "data_offset": 2048, 00:25:52.481 "data_size": 63488 00:25:52.481 }, 00:25:52.481 { 00:25:52.481 "name": "BaseBdev2", 00:25:52.481 "uuid": "30768bd1-0f13-4274-b5fa-5d9fb3d48b99", 00:25:52.482 "is_configured": true, 00:25:52.482 "data_offset": 2048, 00:25:52.482 "data_size": 63488 00:25:52.482 }, 00:25:52.482 { 00:25:52.482 "name": "BaseBdev3", 00:25:52.482 "uuid": "72ec8573-3f40-438a-9aec-3b99f0701bcf", 00:25:52.482 "is_configured": true, 00:25:52.482 "data_offset": 2048, 00:25:52.482 "data_size": 63488 00:25:52.482 } 00:25:52.482 ] 00:25:52.482 }' 00:25:52.482 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.482 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:52.741 
13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.741 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:53.000 [2024-10-28 13:37:06.898989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:53.000 13:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.000 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:53.000 "name": "Existed_Raid", 00:25:53.000 "aliases": [ 00:25:53.000 "c8ae19bc-a3e3-48ca-9163-b23b15174896" 00:25:53.000 ], 00:25:53.000 "product_name": "Raid Volume", 00:25:53.000 "block_size": 512, 00:25:53.000 "num_blocks": 190464, 00:25:53.000 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:53.000 "assigned_rate_limits": { 00:25:53.000 "rw_ios_per_sec": 0, 00:25:53.000 "rw_mbytes_per_sec": 0, 00:25:53.000 "r_mbytes_per_sec": 0, 00:25:53.000 "w_mbytes_per_sec": 0 00:25:53.000 }, 00:25:53.000 "claimed": false, 00:25:53.000 "zoned": false, 00:25:53.000 "supported_io_types": { 00:25:53.000 "read": true, 00:25:53.000 "write": true, 00:25:53.000 "unmap": true, 00:25:53.000 "flush": true, 00:25:53.000 "reset": true, 00:25:53.000 "nvme_admin": false, 00:25:53.000 "nvme_io": false, 00:25:53.000 "nvme_io_md": false, 00:25:53.000 "write_zeroes": true, 00:25:53.000 "zcopy": false, 00:25:53.000 "get_zone_info": false, 00:25:53.000 "zone_management": false, 00:25:53.000 "zone_append": false, 00:25:53.000 "compare": false, 00:25:53.000 "compare_and_write": false, 00:25:53.000 "abort": 
false, 00:25:53.000 "seek_hole": false, 00:25:53.000 "seek_data": false, 00:25:53.000 "copy": false, 00:25:53.000 "nvme_iov_md": false 00:25:53.000 }, 00:25:53.000 "memory_domains": [ 00:25:53.000 { 00:25:53.000 "dma_device_id": "system", 00:25:53.000 "dma_device_type": 1 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.000 "dma_device_type": 2 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "dma_device_id": "system", 00:25:53.000 "dma_device_type": 1 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.000 "dma_device_type": 2 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "dma_device_id": "system", 00:25:53.000 "dma_device_type": 1 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.000 "dma_device_type": 2 00:25:53.000 } 00:25:53.000 ], 00:25:53.000 "driver_specific": { 00:25:53.000 "raid": { 00:25:53.000 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:53.000 "strip_size_kb": 64, 00:25:53.000 "state": "online", 00:25:53.000 "raid_level": "concat", 00:25:53.000 "superblock": true, 00:25:53.000 "num_base_bdevs": 3, 00:25:53.000 "num_base_bdevs_discovered": 3, 00:25:53.000 "num_base_bdevs_operational": 3, 00:25:53.000 "base_bdevs_list": [ 00:25:53.000 { 00:25:53.000 "name": "BaseBdev1", 00:25:53.000 "uuid": "c5f52727-8cfe-4fc7-9b4c-1f2cc2d0570f", 00:25:53.000 "is_configured": true, 00:25:53.000 "data_offset": 2048, 00:25:53.000 "data_size": 63488 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "name": "BaseBdev2", 00:25:53.000 "uuid": "30768bd1-0f13-4274-b5fa-5d9fb3d48b99", 00:25:53.000 "is_configured": true, 00:25:53.000 "data_offset": 2048, 00:25:53.000 "data_size": 63488 00:25:53.000 }, 00:25:53.000 { 00:25:53.000 "name": "BaseBdev3", 00:25:53.000 "uuid": "72ec8573-3f40-438a-9aec-3b99f0701bcf", 00:25:53.000 "is_configured": true, 00:25:53.000 "data_offset": 2048, 00:25:53.000 "data_size": 63488 00:25:53.000 } 00:25:53.000 ] 00:25:53.000 } 
00:25:53.000 } 00:25:53.000 }' 00:25:53.000 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:53.000 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:53.000 BaseBdev2 00:25:53.000 BaseBdev3' 00:25:53.000 13:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.000 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:53.000 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.000 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.001 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.260 [2024-10-28 13:37:07.218704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:25:53.260 [2024-10-28 13:37:07.218754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.260 [2024-10-28 13:37:07.218834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.260 13:37:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.260 "name": "Existed_Raid", 00:25:53.260 "uuid": "c8ae19bc-a3e3-48ca-9163-b23b15174896", 00:25:53.260 "strip_size_kb": 64, 00:25:53.260 "state": "offline", 00:25:53.260 "raid_level": "concat", 00:25:53.260 "superblock": true, 00:25:53.260 "num_base_bdevs": 3, 00:25:53.260 "num_base_bdevs_discovered": 2, 00:25:53.260 "num_base_bdevs_operational": 2, 00:25:53.260 "base_bdevs_list": [ 00:25:53.260 { 00:25:53.260 "name": null, 00:25:53.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.260 "is_configured": false, 00:25:53.260 "data_offset": 0, 00:25:53.260 "data_size": 63488 00:25:53.260 }, 00:25:53.260 { 00:25:53.260 "name": "BaseBdev2", 00:25:53.260 "uuid": "30768bd1-0f13-4274-b5fa-5d9fb3d48b99", 00:25:53.260 "is_configured": true, 00:25:53.260 "data_offset": 2048, 00:25:53.260 "data_size": 63488 00:25:53.260 }, 00:25:53.260 { 00:25:53.260 "name": "BaseBdev3", 00:25:53.260 "uuid": "72ec8573-3f40-438a-9aec-3b99f0701bcf", 00:25:53.260 "is_configured": true, 00:25:53.260 "data_offset": 2048, 00:25:53.260 "data_size": 63488 00:25:53.260 } 00:25:53.260 ] 00:25:53.260 }' 00:25:53.260 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.260 13:37:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 [2024-10-28 13:37:07.802047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 [2024-10-28 13:37:07.869867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:53.828 [2024-10-28 13:37:07.869971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.828 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.828 [ 00:25:53.828 { 00:25:53.828 "name": "BaseBdev2", 00:25:53.828 "aliases": [ 00:25:53.828 "afb5a788-fb05-442a-b31c-55b03482f4b1" 00:25:53.828 ], 00:25:53.828 "product_name": "Malloc disk", 00:25:53.828 "block_size": 512, 00:25:53.828 "num_blocks": 65536, 00:25:53.828 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:53.828 "assigned_rate_limits": { 00:25:53.828 "rw_ios_per_sec": 0, 00:25:53.828 "rw_mbytes_per_sec": 0, 00:25:53.828 "r_mbytes_per_sec": 0, 00:25:53.828 "w_mbytes_per_sec": 0 00:25:53.828 }, 00:25:53.828 "claimed": false, 00:25:53.828 "zoned": false, 00:25:53.828 "supported_io_types": { 00:25:53.828 "read": true, 00:25:53.828 "write": true, 00:25:53.828 "unmap": true, 00:25:53.828 "flush": true, 00:25:53.828 "reset": true, 00:25:53.828 "nvme_admin": false, 00:25:53.828 "nvme_io": false, 00:25:53.828 "nvme_io_md": false, 00:25:53.828 "write_zeroes": true, 00:25:53.828 "zcopy": true, 00:25:53.828 "get_zone_info": false, 00:25:53.828 "zone_management": false, 00:25:53.828 "zone_append": false, 00:25:53.828 "compare": false, 00:25:53.828 "compare_and_write": false, 00:25:53.828 "abort": true, 00:25:53.828 "seek_hole": false, 00:25:53.828 "seek_data": false, 00:25:53.828 "copy": true, 00:25:53.828 "nvme_iov_md": false 00:25:53.828 }, 
00:25:53.828 "memory_domains": [ 00:25:53.828 { 00:25:53.828 "dma_device_id": "system", 00:25:53.828 "dma_device_type": 1 00:25:53.828 }, 00:25:53.828 { 00:25:53.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.829 "dma_device_type": 2 00:25:53.829 } 00:25:53.829 ], 00:25:53.829 "driver_specific": {} 00:25:53.829 } 00:25:53.829 ] 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.829 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.087 BaseBdev3 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:54.087 13:37:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.087 13:37:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.087 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.087 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:54.087 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.087 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.087 [ 00:25:54.087 { 00:25:54.087 "name": "BaseBdev3", 00:25:54.087 "aliases": [ 00:25:54.087 "1534ea08-5043-4dc0-b376-d6a0647ddf46" 00:25:54.087 ], 00:25:54.087 "product_name": "Malloc disk", 00:25:54.087 "block_size": 512, 00:25:54.087 "num_blocks": 65536, 00:25:54.087 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:54.087 "assigned_rate_limits": { 00:25:54.087 "rw_ios_per_sec": 0, 00:25:54.087 "rw_mbytes_per_sec": 0, 00:25:54.087 "r_mbytes_per_sec": 0, 00:25:54.087 "w_mbytes_per_sec": 0 00:25:54.087 }, 00:25:54.087 "claimed": false, 00:25:54.087 "zoned": false, 00:25:54.087 "supported_io_types": { 00:25:54.087 "read": true, 00:25:54.088 "write": true, 00:25:54.088 "unmap": true, 00:25:54.088 "flush": true, 00:25:54.088 "reset": true, 00:25:54.088 "nvme_admin": false, 00:25:54.088 "nvme_io": false, 00:25:54.088 "nvme_io_md": false, 00:25:54.088 "write_zeroes": true, 00:25:54.088 "zcopy": true, 00:25:54.088 "get_zone_info": false, 00:25:54.088 "zone_management": false, 00:25:54.088 "zone_append": false, 00:25:54.088 "compare": false, 00:25:54.088 "compare_and_write": false, 00:25:54.088 "abort": true, 00:25:54.088 "seek_hole": false, 00:25:54.088 "seek_data": false, 
00:25:54.088 "copy": true, 00:25:54.088 "nvme_iov_md": false 00:25:54.088 }, 00:25:54.088 "memory_domains": [ 00:25:54.088 { 00:25:54.088 "dma_device_id": "system", 00:25:54.088 "dma_device_type": 1 00:25:54.088 }, 00:25:54.088 { 00:25:54.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.088 "dma_device_type": 2 00:25:54.088 } 00:25:54.088 ], 00:25:54.088 "driver_specific": {} 00:25:54.088 } 00:25:54.088 ] 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 [2024-10-28 13:37:08.030495] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:54.088 [2024-10-28 13:37:08.030558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:54.088 [2024-10-28 13:37:08.030591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:54.088 [2024-10-28 13:37:08.033264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.088 "name": "Existed_Raid", 00:25:54.088 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:54.088 "strip_size_kb": 64, 00:25:54.088 "state": "configuring", 00:25:54.088 "raid_level": "concat", 00:25:54.088 
"superblock": true, 00:25:54.088 "num_base_bdevs": 3, 00:25:54.088 "num_base_bdevs_discovered": 2, 00:25:54.088 "num_base_bdevs_operational": 3, 00:25:54.088 "base_bdevs_list": [ 00:25:54.088 { 00:25:54.088 "name": "BaseBdev1", 00:25:54.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.088 "is_configured": false, 00:25:54.088 "data_offset": 0, 00:25:54.088 "data_size": 0 00:25:54.088 }, 00:25:54.088 { 00:25:54.088 "name": "BaseBdev2", 00:25:54.088 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:54.088 "is_configured": true, 00:25:54.088 "data_offset": 2048, 00:25:54.088 "data_size": 63488 00:25:54.088 }, 00:25:54.088 { 00:25:54.088 "name": "BaseBdev3", 00:25:54.088 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:54.088 "is_configured": true, 00:25:54.088 "data_offset": 2048, 00:25:54.088 "data_size": 63488 00:25:54.088 } 00:25:54.088 ] 00:25:54.088 }' 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.088 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.656 [2024-10-28 13:37:08.550754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.656 "name": "Existed_Raid", 00:25:54.656 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:54.656 "strip_size_kb": 64, 00:25:54.656 "state": "configuring", 00:25:54.656 "raid_level": "concat", 00:25:54.656 "superblock": true, 00:25:54.656 "num_base_bdevs": 3, 00:25:54.656 "num_base_bdevs_discovered": 1, 00:25:54.656 "num_base_bdevs_operational": 3, 00:25:54.656 "base_bdevs_list": [ 00:25:54.656 { 00:25:54.656 "name": "BaseBdev1", 
00:25:54.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.656 "is_configured": false, 00:25:54.656 "data_offset": 0, 00:25:54.656 "data_size": 0 00:25:54.656 }, 00:25:54.656 { 00:25:54.656 "name": null, 00:25:54.656 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:54.656 "is_configured": false, 00:25:54.656 "data_offset": 0, 00:25:54.656 "data_size": 63488 00:25:54.656 }, 00:25:54.656 { 00:25:54.656 "name": "BaseBdev3", 00:25:54.656 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:54.656 "is_configured": true, 00:25:54.656 "data_offset": 2048, 00:25:54.656 "data_size": 63488 00:25:54.656 } 00:25:54.656 ] 00:25:54.656 }' 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.656 13:37:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 [2024-10-28 13:37:09.152855] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:55.225 BaseBdev1 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 [ 00:25:55.225 { 00:25:55.225 "name": "BaseBdev1", 00:25:55.225 "aliases": [ 00:25:55.225 "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a" 00:25:55.225 ], 00:25:55.225 "product_name": "Malloc disk", 00:25:55.225 "block_size": 512, 00:25:55.225 "num_blocks": 65536, 00:25:55.225 
"uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:55.225 "assigned_rate_limits": { 00:25:55.225 "rw_ios_per_sec": 0, 00:25:55.225 "rw_mbytes_per_sec": 0, 00:25:55.225 "r_mbytes_per_sec": 0, 00:25:55.225 "w_mbytes_per_sec": 0 00:25:55.225 }, 00:25:55.225 "claimed": true, 00:25:55.225 "claim_type": "exclusive_write", 00:25:55.225 "zoned": false, 00:25:55.225 "supported_io_types": { 00:25:55.225 "read": true, 00:25:55.225 "write": true, 00:25:55.225 "unmap": true, 00:25:55.225 "flush": true, 00:25:55.225 "reset": true, 00:25:55.225 "nvme_admin": false, 00:25:55.225 "nvme_io": false, 00:25:55.225 "nvme_io_md": false, 00:25:55.225 "write_zeroes": true, 00:25:55.225 "zcopy": true, 00:25:55.225 "get_zone_info": false, 00:25:55.225 "zone_management": false, 00:25:55.225 "zone_append": false, 00:25:55.225 "compare": false, 00:25:55.225 "compare_and_write": false, 00:25:55.225 "abort": true, 00:25:55.225 "seek_hole": false, 00:25:55.225 "seek_data": false, 00:25:55.225 "copy": true, 00:25:55.225 "nvme_iov_md": false 00:25:55.225 }, 00:25:55.225 "memory_domains": [ 00:25:55.225 { 00:25:55.225 "dma_device_id": "system", 00:25:55.225 "dma_device_type": 1 00:25:55.225 }, 00:25:55.225 { 00:25:55.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.225 "dma_device_type": 2 00:25:55.225 } 00:25:55.225 ], 00:25:55.225 "driver_specific": {} 00:25:55.225 } 00:25:55.225 ] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.225 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.225 "name": "Existed_Raid", 00:25:55.225 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:55.225 "strip_size_kb": 64, 00:25:55.225 "state": "configuring", 00:25:55.225 "raid_level": "concat", 00:25:55.225 "superblock": true, 00:25:55.225 "num_base_bdevs": 3, 00:25:55.225 "num_base_bdevs_discovered": 2, 00:25:55.225 "num_base_bdevs_operational": 3, 00:25:55.225 "base_bdevs_list": [ 00:25:55.225 { 00:25:55.225 "name": "BaseBdev1", 00:25:55.225 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 
00:25:55.225 "is_configured": true, 00:25:55.225 "data_offset": 2048, 00:25:55.225 "data_size": 63488 00:25:55.225 }, 00:25:55.226 { 00:25:55.226 "name": null, 00:25:55.226 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:55.226 "is_configured": false, 00:25:55.226 "data_offset": 0, 00:25:55.226 "data_size": 63488 00:25:55.226 }, 00:25:55.226 { 00:25:55.226 "name": "BaseBdev3", 00:25:55.226 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:55.226 "is_configured": true, 00:25:55.226 "data_offset": 2048, 00:25:55.226 "data_size": 63488 00:25:55.226 } 00:25:55.226 ] 00:25:55.226 }' 00:25:55.226 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.226 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.792 [2024-10-28 13:37:09.733226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:55.792 13:37:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:55.792 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:55.793 "name": 
"Existed_Raid", 00:25:55.793 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:55.793 "strip_size_kb": 64, 00:25:55.793 "state": "configuring", 00:25:55.793 "raid_level": "concat", 00:25:55.793 "superblock": true, 00:25:55.793 "num_base_bdevs": 3, 00:25:55.793 "num_base_bdevs_discovered": 1, 00:25:55.793 "num_base_bdevs_operational": 3, 00:25:55.793 "base_bdevs_list": [ 00:25:55.793 { 00:25:55.793 "name": "BaseBdev1", 00:25:55.793 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:55.793 "is_configured": true, 00:25:55.793 "data_offset": 2048, 00:25:55.793 "data_size": 63488 00:25:55.793 }, 00:25:55.793 { 00:25:55.793 "name": null, 00:25:55.793 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:55.793 "is_configured": false, 00:25:55.793 "data_offset": 0, 00:25:55.793 "data_size": 63488 00:25:55.793 }, 00:25:55.793 { 00:25:55.793 "name": null, 00:25:55.793 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:55.793 "is_configured": false, 00:25:55.793 "data_offset": 0, 00:25:55.793 "data_size": 63488 00:25:55.793 } 00:25:55.793 ] 00:25:55.793 }' 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:55.793 13:37:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:56.359 
13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.359 [2024-10-28 13:37:10.337482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.359 "name": "Existed_Raid", 00:25:56.359 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:56.359 "strip_size_kb": 64, 00:25:56.359 "state": "configuring", 00:25:56.359 "raid_level": "concat", 00:25:56.359 "superblock": true, 00:25:56.359 "num_base_bdevs": 3, 00:25:56.359 "num_base_bdevs_discovered": 2, 00:25:56.359 "num_base_bdevs_operational": 3, 00:25:56.359 "base_bdevs_list": [ 00:25:56.359 { 00:25:56.359 "name": "BaseBdev1", 00:25:56.359 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:56.359 "is_configured": true, 00:25:56.359 "data_offset": 2048, 00:25:56.359 "data_size": 63488 00:25:56.359 }, 00:25:56.359 { 00:25:56.359 "name": null, 00:25:56.359 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:56.359 "is_configured": false, 00:25:56.359 "data_offset": 0, 00:25:56.359 "data_size": 63488 00:25:56.359 }, 00:25:56.359 { 00:25:56.359 "name": "BaseBdev3", 00:25:56.359 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:56.359 "is_configured": true, 00:25:56.359 "data_offset": 2048, 00:25:56.359 "data_size": 63488 00:25:56.359 } 00:25:56.359 ] 00:25:56.359 }' 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.359 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:56.928 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 [2024-10-28 13:37:10.957644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 13:37:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.929 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:56.929 "name": "Existed_Raid", 00:25:56.929 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:56.929 "strip_size_kb": 64, 00:25:56.929 "state": "configuring", 00:25:56.929 "raid_level": "concat", 00:25:56.929 "superblock": true, 00:25:56.929 "num_base_bdevs": 3, 00:25:56.929 "num_base_bdevs_discovered": 1, 00:25:56.929 "num_base_bdevs_operational": 3, 00:25:56.929 "base_bdevs_list": [ 00:25:56.929 { 00:25:56.929 "name": null, 00:25:56.929 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:56.929 "is_configured": false, 00:25:56.929 "data_offset": 0, 00:25:56.929 "data_size": 63488 00:25:56.929 }, 00:25:56.929 { 00:25:56.929 "name": null, 00:25:56.929 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:56.929 "is_configured": false, 00:25:56.929 "data_offset": 0, 00:25:56.929 "data_size": 63488 00:25:56.929 }, 00:25:56.929 { 00:25:56.929 "name": "BaseBdev3", 00:25:56.929 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:56.929 "is_configured": true, 00:25:56.929 "data_offset": 2048, 00:25:56.929 "data_size": 63488 00:25:56.929 } 
00:25:56.929 ] 00:25:56.929 }' 00:25:56.929 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:56.929 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.497 [2024-10-28 13:37:11.527808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.497 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:57.497 "name": "Existed_Raid", 00:25:57.497 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:57.497 "strip_size_kb": 64, 00:25:57.497 "state": "configuring", 00:25:57.497 "raid_level": "concat", 00:25:57.497 "superblock": true, 00:25:57.497 "num_base_bdevs": 3, 00:25:57.497 "num_base_bdevs_discovered": 2, 00:25:57.497 "num_base_bdevs_operational": 3, 00:25:57.497 "base_bdevs_list": [ 00:25:57.497 { 00:25:57.497 "name": null, 00:25:57.497 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:57.497 "is_configured": false, 00:25:57.497 "data_offset": 0, 
00:25:57.497 "data_size": 63488 00:25:57.497 }, 00:25:57.497 { 00:25:57.497 "name": "BaseBdev2", 00:25:57.497 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:57.497 "is_configured": true, 00:25:57.497 "data_offset": 2048, 00:25:57.497 "data_size": 63488 00:25:57.497 }, 00:25:57.497 { 00:25:57.497 "name": "BaseBdev3", 00:25:57.498 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:57.498 "is_configured": true, 00:25:57.498 "data_offset": 2048, 00:25:57.498 "data_size": 63488 00:25:57.498 } 00:25:57.498 ] 00:25:57.498 }' 00:25:57.498 13:37:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:57.498 13:37:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.065 [2024-10-28 13:37:12.181673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:58.065 [2024-10-28 13:37:12.181936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:58.065 [2024-10-28 13:37:12.181954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:58.065 NewBaseBdev 00:25:58.065 [2024-10-28 13:37:12.182279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:58.065 [2024-10-28 13:37:12.182428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:58.065 [2024-10-28 13:37:12.182454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:58.065 [2024-10-28 13:37:12.182583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.065 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.066 [ 00:25:58.066 { 00:25:58.066 "name": "NewBaseBdev", 00:25:58.066 "aliases": [ 00:25:58.066 "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a" 00:25:58.066 ], 00:25:58.066 "product_name": "Malloc disk", 00:25:58.066 "block_size": 512, 00:25:58.066 "num_blocks": 65536, 00:25:58.066 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:58.066 "assigned_rate_limits": { 00:25:58.066 "rw_ios_per_sec": 0, 00:25:58.066 "rw_mbytes_per_sec": 0, 00:25:58.066 "r_mbytes_per_sec": 0, 00:25:58.066 "w_mbytes_per_sec": 0 00:25:58.066 }, 00:25:58.066 "claimed": true, 00:25:58.066 "claim_type": "exclusive_write", 00:25:58.066 "zoned": false, 00:25:58.066 "supported_io_types": { 00:25:58.066 "read": true, 00:25:58.066 "write": true, 00:25:58.066 "unmap": true, 00:25:58.066 "flush": true, 00:25:58.066 "reset": true, 00:25:58.066 "nvme_admin": false, 00:25:58.066 "nvme_io": false, 00:25:58.066 "nvme_io_md": false, 00:25:58.066 "write_zeroes": true, 00:25:58.066 "zcopy": true, 00:25:58.066 "get_zone_info": false, 
00:25:58.066 "zone_management": false, 00:25:58.066 "zone_append": false, 00:25:58.066 "compare": false, 00:25:58.066 "compare_and_write": false, 00:25:58.066 "abort": true, 00:25:58.066 "seek_hole": false, 00:25:58.066 "seek_data": false, 00:25:58.066 "copy": true, 00:25:58.066 "nvme_iov_md": false 00:25:58.066 }, 00:25:58.066 "memory_domains": [ 00:25:58.066 { 00:25:58.066 "dma_device_id": "system", 00:25:58.066 "dma_device_type": 1 00:25:58.066 }, 00:25:58.066 { 00:25:58.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.066 "dma_device_type": 2 00:25:58.066 } 00:25:58.066 ], 00:25:58.066 "driver_specific": {} 00:25:58.066 } 00:25:58.066 ] 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.066 13:37:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.066 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.324 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.324 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.324 "name": "Existed_Raid", 00:25:58.324 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:58.324 "strip_size_kb": 64, 00:25:58.324 "state": "online", 00:25:58.324 "raid_level": "concat", 00:25:58.324 "superblock": true, 00:25:58.324 "num_base_bdevs": 3, 00:25:58.324 "num_base_bdevs_discovered": 3, 00:25:58.324 "num_base_bdevs_operational": 3, 00:25:58.324 "base_bdevs_list": [ 00:25:58.324 { 00:25:58.324 "name": "NewBaseBdev", 00:25:58.324 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:58.324 "is_configured": true, 00:25:58.324 "data_offset": 2048, 00:25:58.324 "data_size": 63488 00:25:58.324 }, 00:25:58.324 { 00:25:58.324 "name": "BaseBdev2", 00:25:58.324 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:58.324 "is_configured": true, 00:25:58.324 "data_offset": 2048, 00:25:58.324 "data_size": 63488 00:25:58.324 }, 00:25:58.324 { 00:25:58.324 "name": "BaseBdev3", 00:25:58.324 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:58.324 "is_configured": true, 00:25:58.324 "data_offset": 2048, 00:25:58.324 "data_size": 63488 00:25:58.324 } 00:25:58.324 ] 00:25:58.324 }' 00:25:58.324 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.324 
13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.583 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:58.842 [2024-10-28 13:37:12.742353] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.842 "name": "Existed_Raid", 00:25:58.842 "aliases": [ 00:25:58.842 "8678929b-0b18-400d-b588-f485aad82e90" 00:25:58.842 ], 00:25:58.842 "product_name": "Raid Volume", 00:25:58.842 "block_size": 512, 00:25:58.842 "num_blocks": 190464, 00:25:58.842 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:58.842 "assigned_rate_limits": { 00:25:58.842 "rw_ios_per_sec": 0, 00:25:58.842 "rw_mbytes_per_sec": 0, 
00:25:58.842 "r_mbytes_per_sec": 0, 00:25:58.842 "w_mbytes_per_sec": 0 00:25:58.842 }, 00:25:58.842 "claimed": false, 00:25:58.842 "zoned": false, 00:25:58.842 "supported_io_types": { 00:25:58.842 "read": true, 00:25:58.842 "write": true, 00:25:58.842 "unmap": true, 00:25:58.842 "flush": true, 00:25:58.842 "reset": true, 00:25:58.842 "nvme_admin": false, 00:25:58.842 "nvme_io": false, 00:25:58.842 "nvme_io_md": false, 00:25:58.842 "write_zeroes": true, 00:25:58.842 "zcopy": false, 00:25:58.842 "get_zone_info": false, 00:25:58.842 "zone_management": false, 00:25:58.842 "zone_append": false, 00:25:58.842 "compare": false, 00:25:58.842 "compare_and_write": false, 00:25:58.842 "abort": false, 00:25:58.842 "seek_hole": false, 00:25:58.842 "seek_data": false, 00:25:58.842 "copy": false, 00:25:58.842 "nvme_iov_md": false 00:25:58.842 }, 00:25:58.842 "memory_domains": [ 00:25:58.842 { 00:25:58.842 "dma_device_id": "system", 00:25:58.842 "dma_device_type": 1 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.842 "dma_device_type": 2 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "dma_device_id": "system", 00:25:58.842 "dma_device_type": 1 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.842 "dma_device_type": 2 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "dma_device_id": "system", 00:25:58.842 "dma_device_type": 1 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.842 "dma_device_type": 2 00:25:58.842 } 00:25:58.842 ], 00:25:58.842 "driver_specific": { 00:25:58.842 "raid": { 00:25:58.842 "uuid": "8678929b-0b18-400d-b588-f485aad82e90", 00:25:58.842 "strip_size_kb": 64, 00:25:58.842 "state": "online", 00:25:58.842 "raid_level": "concat", 00:25:58.842 "superblock": true, 00:25:58.842 "num_base_bdevs": 3, 00:25:58.842 "num_base_bdevs_discovered": 3, 00:25:58.842 "num_base_bdevs_operational": 3, 00:25:58.842 "base_bdevs_list": [ 00:25:58.842 { 
00:25:58.842 "name": "NewBaseBdev", 00:25:58.842 "uuid": "ace9f9c5-e9b5-47ed-beb7-2cab60d00f1a", 00:25:58.842 "is_configured": true, 00:25:58.842 "data_offset": 2048, 00:25:58.842 "data_size": 63488 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "name": "BaseBdev2", 00:25:58.842 "uuid": "afb5a788-fb05-442a-b31c-55b03482f4b1", 00:25:58.842 "is_configured": true, 00:25:58.842 "data_offset": 2048, 00:25:58.842 "data_size": 63488 00:25:58.842 }, 00:25:58.842 { 00:25:58.842 "name": "BaseBdev3", 00:25:58.842 "uuid": "1534ea08-5043-4dc0-b376-d6a0647ddf46", 00:25:58.842 "is_configured": true, 00:25:58.842 "data_offset": 2048, 00:25:58.842 "data_size": 63488 00:25:58.842 } 00:25:58.842 ] 00:25:58.842 } 00:25:58.842 } 00:25:58.842 }' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:58.842 BaseBdev2 00:25:58.842 BaseBdev3' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.842 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.843 13:37:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.843 13:37:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.102 [2024-10-28 13:37:13.070006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:59.102 [2024-10-28 13:37:13.070050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.102 [2024-10-28 13:37:13.070177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.102 [2024-10-28 13:37:13.070259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.102 [2024-10-28 13:37:13.070278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79061 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79061 ']' 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79061 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.102 13:37:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79061 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:59.102 killing process with pid 79061 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79061' 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79061 00:25:59.102 [2024-10-28 13:37:13.121744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:59.102 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79061 00:25:59.102 [2024-10-28 13:37:13.152491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:59.361 13:37:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:59.361 00:25:59.361 real 0m10.357s 00:25:59.361 user 0m18.211s 00:25:59.361 sys 0m1.678s 00:25:59.361 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:59.361 13:37:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.361 ************************************ 00:25:59.361 END TEST raid_state_function_test_sb 00:25:59.361 ************************************ 00:25:59.361 13:37:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:25:59.361 13:37:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:59.361 13:37:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:59.361 13:37:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:59.361 ************************************ 00:25:59.361 START TEST raid_superblock_test 00:25:59.361 
************************************ 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:59.361 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79676 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@413 -- # waitforlisten 79676 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79676 ']' 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.362 13:37:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.621 [2024-10-28 13:37:13.563467] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:25:59.621 [2024-10-28 13:37:13.563737] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79676 ] 00:25:59.621 [2024-10-28 13:37:13.718475] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:59.621 [2024-10-28 13:37:13.753210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.879 [2024-10-28 13:37:13.807061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.879 [2024-10-28 13:37:13.871788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:59.879 [2024-10-28 13:37:13.871836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.464 malloc1 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.464 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.464 [2024-10-28 13:37:14.559519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:00.465 [2024-10-28 13:37:14.559636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.465 [2024-10-28 13:37:14.559691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:00.465 [2024-10-28 13:37:14.559714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.465 [2024-10-28 13:37:14.563454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.465 [2024-10-28 13:37:14.563502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:00.465 pt1 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.465 malloc2 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.465 [2024-10-28 13:37:14.589580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:00.465 [2024-10-28 13:37:14.589660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.465 [2024-10-28 13:37:14.589693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:00.465 [2024-10-28 13:37:14.589711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:00.465 [2024-10-28 13:37:14.593331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.465 [2024-10-28 13:37:14.593381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:00.465 pt2 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.465 malloc3 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.465 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.465 [2024-10-28 13:37:14.619388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:00.465 [2024-10-28 13:37:14.619458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:00.465 [2024-10-28 13:37:14.619495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:00.465 [2024-10-28 13:37:14.619516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:26:00.723 [2024-10-28 13:37:14.623152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:00.723 [2024-10-28 13:37:14.623201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:00.723 pt3 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.723 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.723 [2024-10-28 13:37:14.631412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:00.723 [2024-10-28 13:37:14.634082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:00.723 [2024-10-28 13:37:14.634217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:00.724 [2024-10-28 13:37:14.634411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:26:00.724 [2024-10-28 13:37:14.634434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:00.724 [2024-10-28 13:37:14.634762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:00.724 [2024-10-28 13:37:14.634958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:26:00.724 [2024-10-28 13:37:14.634984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:26:00.724 [2024-10-28 
13:37:14.635171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:00.724 "name": "raid_bdev1", 00:26:00.724 
"uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:00.724 "strip_size_kb": 64, 00:26:00.724 "state": "online", 00:26:00.724 "raid_level": "concat", 00:26:00.724 "superblock": true, 00:26:00.724 "num_base_bdevs": 3, 00:26:00.724 "num_base_bdevs_discovered": 3, 00:26:00.724 "num_base_bdevs_operational": 3, 00:26:00.724 "base_bdevs_list": [ 00:26:00.724 { 00:26:00.724 "name": "pt1", 00:26:00.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:00.724 "is_configured": true, 00:26:00.724 "data_offset": 2048, 00:26:00.724 "data_size": 63488 00:26:00.724 }, 00:26:00.724 { 00:26:00.724 "name": "pt2", 00:26:00.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:00.724 "is_configured": true, 00:26:00.724 "data_offset": 2048, 00:26:00.724 "data_size": 63488 00:26:00.724 }, 00:26:00.724 { 00:26:00.724 "name": "pt3", 00:26:00.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:00.724 "is_configured": true, 00:26:00.724 "data_offset": 2048, 00:26:00.724 "data_size": 63488 00:26:00.724 } 00:26:00.724 ] 00:26:00.724 }' 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:00.724 13:37:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 [2024-10-28 13:37:15.168017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:01.291 "name": "raid_bdev1", 00:26:01.291 "aliases": [ 00:26:01.291 "35c936aa-ff1a-460d-bafc-4eda36cc1aef" 00:26:01.291 ], 00:26:01.291 "product_name": "Raid Volume", 00:26:01.291 "block_size": 512, 00:26:01.291 "num_blocks": 190464, 00:26:01.291 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:01.291 "assigned_rate_limits": { 00:26:01.291 "rw_ios_per_sec": 0, 00:26:01.291 "rw_mbytes_per_sec": 0, 00:26:01.291 "r_mbytes_per_sec": 0, 00:26:01.291 "w_mbytes_per_sec": 0 00:26:01.291 }, 00:26:01.291 "claimed": false, 00:26:01.291 "zoned": false, 00:26:01.291 "supported_io_types": { 00:26:01.291 "read": true, 00:26:01.291 "write": true, 00:26:01.291 "unmap": true, 00:26:01.291 "flush": true, 00:26:01.291 "reset": true, 00:26:01.291 "nvme_admin": false, 00:26:01.291 "nvme_io": false, 00:26:01.291 "nvme_io_md": false, 00:26:01.291 "write_zeroes": true, 00:26:01.291 "zcopy": false, 00:26:01.291 "get_zone_info": false, 00:26:01.291 "zone_management": false, 00:26:01.291 "zone_append": false, 00:26:01.291 "compare": false, 00:26:01.291 "compare_and_write": false, 00:26:01.291 "abort": false, 00:26:01.291 "seek_hole": false, 00:26:01.291 "seek_data": false, 00:26:01.291 "copy": false, 00:26:01.291 "nvme_iov_md": false 00:26:01.291 }, 00:26:01.291 "memory_domains": [ 00:26:01.291 { 00:26:01.291 "dma_device_id": "system", 00:26:01.291 
"dma_device_type": 1 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.291 "dma_device_type": 2 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "dma_device_id": "system", 00:26:01.291 "dma_device_type": 1 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.291 "dma_device_type": 2 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "dma_device_id": "system", 00:26:01.291 "dma_device_type": 1 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.291 "dma_device_type": 2 00:26:01.291 } 00:26:01.291 ], 00:26:01.291 "driver_specific": { 00:26:01.291 "raid": { 00:26:01.291 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:01.291 "strip_size_kb": 64, 00:26:01.291 "state": "online", 00:26:01.291 "raid_level": "concat", 00:26:01.291 "superblock": true, 00:26:01.291 "num_base_bdevs": 3, 00:26:01.291 "num_base_bdevs_discovered": 3, 00:26:01.291 "num_base_bdevs_operational": 3, 00:26:01.291 "base_bdevs_list": [ 00:26:01.291 { 00:26:01.291 "name": "pt1", 00:26:01.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:01.291 "is_configured": true, 00:26:01.291 "data_offset": 2048, 00:26:01.291 "data_size": 63488 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "name": "pt2", 00:26:01.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:01.291 "is_configured": true, 00:26:01.291 "data_offset": 2048, 00:26:01.291 "data_size": 63488 00:26:01.291 }, 00:26:01.291 { 00:26:01.291 "name": "pt3", 00:26:01.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:01.291 "is_configured": true, 00:26:01.291 "data_offset": 2048, 00:26:01.291 "data_size": 63488 00:26:01.291 } 00:26:01.291 ] 00:26:01.291 } 00:26:01.291 } 00:26:01.291 }' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:01.291 pt2 00:26:01.291 pt3' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.291 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.292 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 [2024-10-28 13:37:15.492103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35c936aa-ff1a-460d-bafc-4eda36cc1aef 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35c936aa-ff1a-460d-bafc-4eda36cc1aef ']' 00:26:01.551 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 [2024-10-28 13:37:15.539711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:01.551 [2024-10-28 13:37:15.539750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.551 [2024-10-28 13:37:15.539849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.551 [2024-10-28 13:37:15.539956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:01.551 [2024-10-28 13:37:15.539973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.551 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 [2024-10-28 13:37:15.691811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:01.551 [2024-10-28 13:37:15.694902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:01.551 [2024-10-28 13:37:15.695123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:01.551 [2024-10-28 13:37:15.695221] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:01.551 [2024-10-28 13:37:15.695288] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:01.551 [2024-10-28 13:37:15.695319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:01.551 [2024-10-28 13:37:15.695343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:01.551 [2024-10-28 13:37:15.695362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:26:01.551 request: 00:26:01.551 { 00:26:01.551 "name": "raid_bdev1", 00:26:01.551 "raid_level": "concat", 00:26:01.551 "base_bdevs": [ 00:26:01.551 "malloc1", 00:26:01.551 "malloc2", 00:26:01.551 "malloc3" 00:26:01.551 ], 00:26:01.551 "strip_size_kb": 64, 00:26:01.551 "superblock": false, 00:26:01.551 "method": "bdev_raid_create", 00:26:01.551 "req_id": 1 00:26:01.551 } 00:26:01.551 Got JSON-RPC error response 00:26:01.551 response: 00:26:01.551 { 00:26:01.551 "code": -17, 00:26:01.551 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:01.551 } 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.551 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.551 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.810 [2024-10-28 13:37:15.755877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:01.810 [2024-10-28 13:37:15.756192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:01.810 [2024-10-28 13:37:15.756271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:01.810 [2024-10-28 13:37:15.756484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:01.810 [2024-10-28 13:37:15.759925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:01.810 [2024-10-28 13:37:15.760150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:01.810 [2024-10-28 13:37:15.760385] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:01.810 [2024-10-28 13:37:15.760546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:01.810 pt1 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:26:01.810 13:37:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.810 "name": "raid_bdev1", 00:26:01.810 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:01.810 "strip_size_kb": 64, 00:26:01.810 "state": "configuring", 00:26:01.810 "raid_level": "concat", 00:26:01.810 "superblock": true, 00:26:01.810 "num_base_bdevs": 3, 00:26:01.810 "num_base_bdevs_discovered": 1, 00:26:01.810 "num_base_bdevs_operational": 3, 00:26:01.810 "base_bdevs_list": [ 
00:26:01.810 { 00:26:01.810 "name": "pt1", 00:26:01.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:01.810 "is_configured": true, 00:26:01.810 "data_offset": 2048, 00:26:01.810 "data_size": 63488 00:26:01.810 }, 00:26:01.810 { 00:26:01.810 "name": null, 00:26:01.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:01.810 "is_configured": false, 00:26:01.810 "data_offset": 2048, 00:26:01.810 "data_size": 63488 00:26:01.810 }, 00:26:01.810 { 00:26:01.810 "name": null, 00:26:01.810 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:01.810 "is_configured": false, 00:26:01.810 "data_offset": 2048, 00:26:01.810 "data_size": 63488 00:26:01.810 } 00:26:01.810 ] 00:26:01.810 }' 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.810 13:37:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.410 [2024-10-28 13:37:16.280675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:02.410 [2024-10-28 13:37:16.281271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.410 [2024-10-28 13:37:16.281326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:02.410 [2024-10-28 13:37:16.281344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.410 [2024-10-28 13:37:16.281913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.410 [2024-10-28 
13:37:16.281943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:02.410 [2024-10-28 13:37:16.282040] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:02.410 [2024-10-28 13:37:16.282070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:02.410 pt2 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.410 [2024-10-28 13:37:16.288666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.410 "name": "raid_bdev1", 00:26:02.410 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:02.410 "strip_size_kb": 64, 00:26:02.410 "state": "configuring", 00:26:02.410 "raid_level": "concat", 00:26:02.410 "superblock": true, 00:26:02.410 "num_base_bdevs": 3, 00:26:02.410 "num_base_bdevs_discovered": 1, 00:26:02.410 "num_base_bdevs_operational": 3, 00:26:02.410 "base_bdevs_list": [ 00:26:02.410 { 00:26:02.410 "name": "pt1", 00:26:02.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:02.410 "is_configured": true, 00:26:02.410 "data_offset": 2048, 00:26:02.410 "data_size": 63488 00:26:02.410 }, 00:26:02.410 { 00:26:02.410 "name": null, 00:26:02.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.410 "is_configured": false, 00:26:02.410 "data_offset": 0, 00:26:02.410 "data_size": 63488 00:26:02.410 }, 00:26:02.410 { 00:26:02.410 "name": null, 00:26:02.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.410 "is_configured": false, 00:26:02.410 "data_offset": 2048, 00:26:02.410 "data_size": 63488 00:26:02.410 } 00:26:02.410 ] 00:26:02.410 }' 00:26:02.410 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.410 13:37:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.669 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:02.669 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:02.669 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:02.669 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.669 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.927 [2024-10-28 13:37:16.832848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:02.927 [2024-10-28 13:37:16.833116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.927 [2024-10-28 13:37:16.833225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:02.927 [2024-10-28 13:37:16.833468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.927 [2024-10-28 13:37:16.834075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.927 [2024-10-28 13:37:16.834125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:02.927 [2024-10-28 13:37:16.834268] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:02.927 [2024-10-28 13:37:16.834310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:02.927 pt2 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.927 [2024-10-28 13:37:16.840767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:02.927 [2024-10-28 13:37:16.840989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.927 [2024-10-28 13:37:16.841175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:02.927 [2024-10-28 13:37:16.841353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.927 [2024-10-28 13:37:16.841958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.927 [2024-10-28 13:37:16.842116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:02.927 [2024-10-28 13:37:16.842317] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:02.927 [2024-10-28 13:37:16.842496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:02.927 [2024-10-28 13:37:16.842748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:02.927 [2024-10-28 13:37:16.842890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:02.927 [2024-10-28 13:37:16.843272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:02.927 [2024-10-28 13:37:16.843563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:02.927 [2024-10-28 13:37:16.843673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:02.927 [2024-10-28 13:37:16.843994] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.927 pt3 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.927 13:37:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.928 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.928 "name": "raid_bdev1", 00:26:02.928 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:02.928 "strip_size_kb": 64, 00:26:02.928 "state": "online", 00:26:02.928 "raid_level": "concat", 00:26:02.928 "superblock": true, 00:26:02.928 "num_base_bdevs": 3, 00:26:02.928 "num_base_bdevs_discovered": 3, 00:26:02.928 "num_base_bdevs_operational": 3, 00:26:02.928 "base_bdevs_list": [ 00:26:02.928 { 00:26:02.928 "name": "pt1", 00:26:02.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:02.928 "is_configured": true, 00:26:02.928 "data_offset": 2048, 00:26:02.928 "data_size": 63488 00:26:02.928 }, 00:26:02.928 { 00:26:02.928 "name": "pt2", 00:26:02.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.928 "is_configured": true, 00:26:02.928 "data_offset": 2048, 00:26:02.928 "data_size": 63488 00:26:02.928 }, 00:26:02.928 { 00:26:02.928 "name": "pt3", 00:26:02.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.928 "is_configured": true, 00:26:02.928 "data_offset": 2048, 00:26:02.928 "data_size": 63488 00:26:02.928 } 00:26:02.928 ] 00:26:02.928 }' 00:26:02.928 13:37:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.928 13:37:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.494 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:03.494 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:03.495 13:37:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.495 [2024-10-28 13:37:17.373611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:03.495 "name": "raid_bdev1", 00:26:03.495 "aliases": [ 00:26:03.495 "35c936aa-ff1a-460d-bafc-4eda36cc1aef" 00:26:03.495 ], 00:26:03.495 "product_name": "Raid Volume", 00:26:03.495 "block_size": 512, 00:26:03.495 "num_blocks": 190464, 00:26:03.495 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:03.495 "assigned_rate_limits": { 00:26:03.495 "rw_ios_per_sec": 0, 00:26:03.495 "rw_mbytes_per_sec": 0, 00:26:03.495 "r_mbytes_per_sec": 0, 00:26:03.495 "w_mbytes_per_sec": 0 00:26:03.495 }, 00:26:03.495 "claimed": false, 00:26:03.495 "zoned": false, 00:26:03.495 "supported_io_types": { 00:26:03.495 "read": true, 00:26:03.495 "write": true, 00:26:03.495 "unmap": true, 00:26:03.495 "flush": true, 00:26:03.495 "reset": true, 00:26:03.495 "nvme_admin": false, 00:26:03.495 "nvme_io": false, 00:26:03.495 "nvme_io_md": false, 00:26:03.495 "write_zeroes": true, 00:26:03.495 "zcopy": false, 00:26:03.495 "get_zone_info": false, 00:26:03.495 "zone_management": false, 00:26:03.495 "zone_append": false, 00:26:03.495 "compare": false, 00:26:03.495 "compare_and_write": false, 00:26:03.495 "abort": false, 00:26:03.495 "seek_hole": false, 00:26:03.495 
"seek_data": false, 00:26:03.495 "copy": false, 00:26:03.495 "nvme_iov_md": false 00:26:03.495 }, 00:26:03.495 "memory_domains": [ 00:26:03.495 { 00:26:03.495 "dma_device_id": "system", 00:26:03.495 "dma_device_type": 1 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.495 "dma_device_type": 2 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "dma_device_id": "system", 00:26:03.495 "dma_device_type": 1 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.495 "dma_device_type": 2 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "dma_device_id": "system", 00:26:03.495 "dma_device_type": 1 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.495 "dma_device_type": 2 00:26:03.495 } 00:26:03.495 ], 00:26:03.495 "driver_specific": { 00:26:03.495 "raid": { 00:26:03.495 "uuid": "35c936aa-ff1a-460d-bafc-4eda36cc1aef", 00:26:03.495 "strip_size_kb": 64, 00:26:03.495 "state": "online", 00:26:03.495 "raid_level": "concat", 00:26:03.495 "superblock": true, 00:26:03.495 "num_base_bdevs": 3, 00:26:03.495 "num_base_bdevs_discovered": 3, 00:26:03.495 "num_base_bdevs_operational": 3, 00:26:03.495 "base_bdevs_list": [ 00:26:03.495 { 00:26:03.495 "name": "pt1", 00:26:03.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:03.495 "is_configured": true, 00:26:03.495 "data_offset": 2048, 00:26:03.495 "data_size": 63488 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "name": "pt2", 00:26:03.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:03.495 "is_configured": true, 00:26:03.495 "data_offset": 2048, 00:26:03.495 "data_size": 63488 00:26:03.495 }, 00:26:03.495 { 00:26:03.495 "name": "pt3", 00:26:03.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:03.495 "is_configured": true, 00:26:03.495 "data_offset": 2048, 00:26:03.495 "data_size": 63488 00:26:03.495 } 00:26:03.495 ] 00:26:03.495 } 00:26:03.495 } 00:26:03.495 }' 00:26:03.495 13:37:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:03.495 pt2 00:26:03.495 pt3' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:03.495 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.754 [2024-10-28 13:37:17.666018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
35c936aa-ff1a-460d-bafc-4eda36cc1aef '!=' 35c936aa-ff1a-460d-bafc-4eda36cc1aef ']' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79676 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79676 ']' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79676 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79676 00:26:03.754 killing process with pid 79676 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79676' 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79676 00:26:03.754 13:37:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79676 00:26:03.754 [2024-10-28 13:37:17.749461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.754 [2024-10-28 13:37:17.749713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.754 [2024-10-28 13:37:17.749817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:26:03.754 [2024-10-28 13:37:17.749863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:03.754 [2024-10-28 13:37:17.817245] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:04.013 13:37:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:04.013 00:26:04.013 real 0m4.696s 00:26:04.013 user 0m7.588s 00:26:04.013 sys 0m0.844s 00:26:04.013 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:04.013 ************************************ 00:26:04.013 END TEST raid_superblock_test 00:26:04.013 ************************************ 00:26:04.013 13:37:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.272 13:37:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:26:04.272 13:37:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:04.272 13:37:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:04.272 13:37:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.272 ************************************ 00:26:04.272 START TEST raid_read_error_test 00:26:04.272 ************************************ 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:04.272 13:37:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kTa45IoUmV 00:26:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79929 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79929 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79929 ']' 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.272 13:37:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.272 [2024-10-28 13:37:18.332714] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:04.272 [2024-10-28 13:37:18.333240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79929 ] 00:26:04.530 [2024-10-28 13:37:18.487914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:26:04.530 [2024-10-28 13:37:18.518225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.530 [2024-10-28 13:37:18.594814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.530 [2024-10-28 13:37:18.681284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:04.530 [2024-10-28 13:37:18.681352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 BaseBdev1_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 true 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 [2024-10-28 13:37:19.371980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:05.465 [2024-10-28 13:37:19.372470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.465 [2024-10-28 13:37:19.372523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:05.465 [2024-10-28 13:37:19.372550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.465 [2024-10-28 13:37:19.376243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.465 [2024-10-28 13:37:19.376460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:05.465 BaseBdev1 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 BaseBdev2_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 true 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 [2024-10-28 13:37:19.414317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:05.465 [2024-10-28 13:37:19.414633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.465 [2024-10-28 13:37:19.414691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:05.465 [2024-10-28 13:37:19.414715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.465 [2024-10-28 13:37:19.418260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.465 [2024-10-28 13:37:19.418328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:05.465 BaseBdev2 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 BaseBdev3_malloc 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:05.465 
13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 true 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 [2024-10-28 13:37:19.457040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:05.465 [2024-10-28 13:37:19.457420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.465 [2024-10-28 13:37:19.457461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:05.465 [2024-10-28 13:37:19.457483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.465 [2024-10-28 13:37:19.460932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.465 [2024-10-28 13:37:19.460990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:05.465 BaseBdev3 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.465 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.465 [2024-10-28 13:37:19.465315] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.465 [2024-10-28 13:37:19.468572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.465 [2024-10-28 13:37:19.468857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:05.465 [2024-10-28 13:37:19.469404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:05.466 [2024-10-28 13:37:19.469571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:05.466 [2024-10-28 13:37:19.469992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:05.466 [2024-10-28 13:37:19.470376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:05.466 [2024-10-28 13:37:19.470401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:05.466 [2024-10-28 13:37:19.470654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:05.466 "name": "raid_bdev1", 00:26:05.466 "uuid": "1a6743d9-bfc4-4e6a-bfb5-7f2ceaf4d57d", 00:26:05.466 "strip_size_kb": 64, 00:26:05.466 "state": "online", 00:26:05.466 "raid_level": "concat", 00:26:05.466 "superblock": true, 00:26:05.466 "num_base_bdevs": 3, 00:26:05.466 "num_base_bdevs_discovered": 3, 00:26:05.466 "num_base_bdevs_operational": 3, 00:26:05.466 "base_bdevs_list": [ 00:26:05.466 { 00:26:05.466 "name": "BaseBdev1", 00:26:05.466 "uuid": "05ec0c0b-31d8-5d37-a8e5-26b199d0514a", 00:26:05.466 "is_configured": true, 00:26:05.466 "data_offset": 2048, 00:26:05.466 "data_size": 63488 00:26:05.466 }, 00:26:05.466 { 00:26:05.466 "name": "BaseBdev2", 00:26:05.466 "uuid": "f8dad694-68bc-5080-8d18-6ed69769abd2", 00:26:05.466 "is_configured": true, 00:26:05.466 "data_offset": 2048, 00:26:05.466 "data_size": 63488 00:26:05.466 }, 00:26:05.466 { 00:26:05.466 "name": "BaseBdev3", 00:26:05.466 "uuid": "8a11ba38-d953-5b5b-b7df-9c021e2e9da7", 00:26:05.466 "is_configured": true, 00:26:05.466 "data_offset": 
2048, 00:26:05.466 "data_size": 63488 00:26:05.466 } 00:26:05.466 ] 00:26:05.466 }' 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:05.466 13:37:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.033 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:06.033 13:37:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:06.033 [2024-10-28 13:37:20.126421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:07.000 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:07.001 13:37:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:07.001 "name": "raid_bdev1", 00:26:07.001 "uuid": "1a6743d9-bfc4-4e6a-bfb5-7f2ceaf4d57d", 00:26:07.001 "strip_size_kb": 64, 00:26:07.001 "state": "online", 00:26:07.001 "raid_level": "concat", 00:26:07.001 "superblock": true, 00:26:07.001 "num_base_bdevs": 3, 00:26:07.001 "num_base_bdevs_discovered": 3, 00:26:07.001 "num_base_bdevs_operational": 3, 00:26:07.001 "base_bdevs_list": [ 00:26:07.001 { 00:26:07.001 "name": "BaseBdev1", 00:26:07.001 "uuid": "05ec0c0b-31d8-5d37-a8e5-26b199d0514a", 00:26:07.001 "is_configured": true, 00:26:07.001 "data_offset": 2048, 00:26:07.001 "data_size": 63488 00:26:07.001 }, 00:26:07.001 { 00:26:07.001 "name": "BaseBdev2", 00:26:07.001 "uuid": "f8dad694-68bc-5080-8d18-6ed69769abd2", 00:26:07.001 "is_configured": true, 00:26:07.001 "data_offset": 2048, 
00:26:07.001 "data_size": 63488 00:26:07.001 }, 00:26:07.001 { 00:26:07.001 "name": "BaseBdev3", 00:26:07.001 "uuid": "8a11ba38-d953-5b5b-b7df-9c021e2e9da7", 00:26:07.001 "is_configured": true, 00:26:07.001 "data_offset": 2048, 00:26:07.001 "data_size": 63488 00:26:07.001 } 00:26:07.001 ] 00:26:07.001 }' 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:07.001 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.568 [2024-10-28 13:37:21.537828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.568 [2024-10-28 13:37:21.537921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:07.568 [2024-10-28 13:37:21.541368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:07.568 [2024-10-28 13:37:21.541444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:07.568 [2024-10-28 13:37:21.541512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:07.568 [2024-10-28 13:37:21.541531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:07.568 { 00:26:07.568 "results": [ 00:26:07.568 { 00:26:07.568 "job": "raid_bdev1", 00:26:07.568 "core_mask": "0x1", 00:26:07.568 "workload": "randrw", 00:26:07.568 "percentage": 50, 00:26:07.568 "status": "finished", 00:26:07.568 "queue_depth": 1, 00:26:07.568 "io_size": 131072, 00:26:07.568 "runtime": 1.408307, 00:26:07.568 "iops": 9201.11879015016, 00:26:07.568 "mibps": 
1150.13984876877, 00:26:07.568 "io_failed": 1, 00:26:07.568 "io_timeout": 0, 00:26:07.568 "avg_latency_us": 152.71581056338522, 00:26:07.568 "min_latency_us": 39.09818181818182, 00:26:07.568 "max_latency_us": 2383.1272727272726 00:26:07.568 } 00:26:07.568 ], 00:26:07.568 "core_count": 1 00:26:07.568 } 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79929 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79929 ']' 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79929 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79929 00:26:07.568 killing process with pid 79929 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79929' 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79929 00:26:07.568 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79929 00:26:07.568 [2024-10-28 13:37:21.580785] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:07.568 [2024-10-28 13:37:21.629787] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kTa45IoUmV 
00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:26:07.827 00:26:07.827 real 0m3.746s 00:26:07.827 user 0m4.853s 00:26:07.827 sys 0m0.661s 00:26:07.827 ************************************ 00:26:07.827 END TEST raid_read_error_test 00:26:07.827 ************************************ 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.827 13:37:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.087 13:37:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:26:08.087 13:37:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:08.087 13:37:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:08.087 13:37:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:08.087 ************************************ 00:26:08.087 START TEST raid_write_error_test 00:26:08.087 ************************************ 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:08.087 
13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wasNC4NOz7 00:26:08.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80058 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80058 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80058 ']' 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:08.087 13:37:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.087 [2024-10-28 13:37:22.173658] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:26:08.087 [2024-10-28 13:37:22.174054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80058 ] 00:26:08.346 [2024-10-28 13:37:22.327711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:08.346 [2024-10-28 13:37:22.356818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.346 [2024-10-28 13:37:22.426585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.604 [2024-10-28 13:37:22.504571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:08.604 [2024-10-28 13:37:22.504934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 BaseBdev1_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 true 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 [2024-10-28 13:37:23.173799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:09.173 [2024-10-28 13:37:23.174275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.173 [2024-10-28 13:37:23.174329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:09.173 [2024-10-28 13:37:23.174358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.173 [2024-10-28 13:37:23.177743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.173 BaseBdev1 00:26:09.173 [2024-10-28 13:37:23.177952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 BaseBdev2_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 true 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 [2024-10-28 13:37:23.209691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:09.173 [2024-10-28 13:37:23.210101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.173 [2024-10-28 13:37:23.210162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:09.173 [2024-10-28 13:37:23.210192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.173 [2024-10-28 13:37:23.213360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.173 [2024-10-28 13:37:23.213419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:09.173 BaseBdev2 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:09.173 13:37:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 BaseBdev3_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 true 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.173 [2024-10-28 13:37:23.245528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:09.173 [2024-10-28 13:37:23.245626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.173 [2024-10-28 13:37:23.245662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:09.173 [2024-10-28 13:37:23.245685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.173 [2024-10-28 13:37:23.248838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.173 BaseBdev3 00:26:09.173 [2024-10-28 13:37:23.249207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:09.173 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.174 [2024-10-28 13:37:23.253624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.174 [2024-10-28 13:37:23.256554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:09.174 [2024-10-28 13:37:23.256885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:09.174 [2024-10-28 13:37:23.257237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:09.174 [2024-10-28 13:37:23.257260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:09.174 [2024-10-28 13:37:23.257655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:09.174 [2024-10-28 13:37:23.257890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:09.174 [2024-10-28 13:37:23.257916] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:09.174 [2024-10-28 13:37:23.258241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.174 "name": "raid_bdev1", 00:26:09.174 "uuid": "046f1cc1-9f32-4cd9-8be1-2caaeb3dcc78", 00:26:09.174 "strip_size_kb": 64, 00:26:09.174 "state": "online", 00:26:09.174 "raid_level": "concat", 00:26:09.174 "superblock": true, 00:26:09.174 "num_base_bdevs": 3, 00:26:09.174 "num_base_bdevs_discovered": 3, 00:26:09.174 "num_base_bdevs_operational": 3, 00:26:09.174 "base_bdevs_list": [ 00:26:09.174 { 00:26:09.174 "name": "BaseBdev1", 00:26:09.174 "uuid": "8f2b8c20-48c7-5384-8147-5648ee2fb899", 00:26:09.174 "is_configured": true, 00:26:09.174 "data_offset": 2048, 
00:26:09.174 "data_size": 63488 00:26:09.174 }, 00:26:09.174 { 00:26:09.174 "name": "BaseBdev2", 00:26:09.174 "uuid": "84331799-a7f2-5e37-ab58-e7b5c4842018", 00:26:09.174 "is_configured": true, 00:26:09.174 "data_offset": 2048, 00:26:09.174 "data_size": 63488 00:26:09.174 }, 00:26:09.174 { 00:26:09.174 "name": "BaseBdev3", 00:26:09.174 "uuid": "a633fd48-3d03-5587-8265-e9145a551222", 00:26:09.174 "is_configured": true, 00:26:09.174 "data_offset": 2048, 00:26:09.174 "data_size": 63488 00:26:09.174 } 00:26:09.174 ] 00:26:09.174 }' 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.174 13:37:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.740 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:09.740 13:37:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:09.740 [2024-10-28 13:37:23.879044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:10.724 "name": "raid_bdev1", 00:26:10.724 "uuid": "046f1cc1-9f32-4cd9-8be1-2caaeb3dcc78", 00:26:10.724 "strip_size_kb": 64, 00:26:10.724 "state": "online", 00:26:10.724 "raid_level": "concat", 00:26:10.724 "superblock": true, 00:26:10.724 "num_base_bdevs": 3, 00:26:10.724 "num_base_bdevs_discovered": 3, 
00:26:10.724 "num_base_bdevs_operational": 3, 00:26:10.724 "base_bdevs_list": [ 00:26:10.724 { 00:26:10.724 "name": "BaseBdev1", 00:26:10.724 "uuid": "8f2b8c20-48c7-5384-8147-5648ee2fb899", 00:26:10.724 "is_configured": true, 00:26:10.724 "data_offset": 2048, 00:26:10.724 "data_size": 63488 00:26:10.724 }, 00:26:10.724 { 00:26:10.724 "name": "BaseBdev2", 00:26:10.724 "uuid": "84331799-a7f2-5e37-ab58-e7b5c4842018", 00:26:10.724 "is_configured": true, 00:26:10.724 "data_offset": 2048, 00:26:10.724 "data_size": 63488 00:26:10.724 }, 00:26:10.724 { 00:26:10.724 "name": "BaseBdev3", 00:26:10.724 "uuid": "a633fd48-3d03-5587-8265-e9145a551222", 00:26:10.724 "is_configured": true, 00:26:10.724 "data_offset": 2048, 00:26:10.724 "data_size": 63488 00:26:10.724 } 00:26:10.724 ] 00:26:10.724 }' 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:10.724 13:37:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.291 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:11.291 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.291 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.291 [2024-10-28 13:37:25.309593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:11.292 [2024-10-28 13:37:25.309677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:11.292 [2024-10-28 13:37:25.313006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.292 { 00:26:11.292 "results": [ 00:26:11.292 { 00:26:11.292 "job": "raid_bdev1", 00:26:11.292 "core_mask": "0x1", 00:26:11.292 "workload": "randrw", 00:26:11.292 "percentage": 50, 00:26:11.292 "status": "finished", 00:26:11.292 "queue_depth": 1, 00:26:11.292 "io_size": 131072, 00:26:11.292 "runtime": 1.427849, 
00:26:11.292 "iops": 9541.62519986357, 00:26:11.292 "mibps": 1192.7031499829463, 00:26:11.292 "io_failed": 1, 00:26:11.292 "io_timeout": 0, 00:26:11.292 "avg_latency_us": 147.22734652210175, 00:26:11.292 "min_latency_us": 45.847272727272724, 00:26:11.292 "max_latency_us": 1921.3963636363637 00:26:11.292 } 00:26:11.292 ], 00:26:11.292 "core_count": 1 00:26:11.292 } 00:26:11.292 [2024-10-28 13:37:25.313430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.292 [2024-10-28 13:37:25.313517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.292 [2024-10-28 13:37:25.313547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80058 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80058 ']' 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80058 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80058 00:26:11.292 killing process with pid 80058 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80058' 00:26:11.292 13:37:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80058 00:26:11.292 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80058 00:26:11.292 [2024-10-28 13:37:25.349353] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:11.292 [2024-10-28 13:37:25.398427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wasNC4NOz7 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:26:11.859 00:26:11.859 real 0m3.704s 00:26:11.859 user 0m4.825s 00:26:11.859 sys 0m0.619s 00:26:11.859 ************************************ 00:26:11.859 END TEST raid_write_error_test 00:26:11.859 ************************************ 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.859 13:37:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.859 13:37:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:11.859 13:37:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:26:11.859 13:37:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:11.859 13:37:25 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.859 13:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.859 ************************************ 00:26:11.859 START TEST raid_state_function_test 00:26:11.859 ************************************ 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:11.859 Process raid pid: 80196 00:26:11.859 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80196 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80196' 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80196 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80196 ']' 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 
-- # local max_retries=100 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:11.860 13:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.860 [2024-10-28 13:37:25.868278] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:11.860 [2024-10-28 13:37:25.868710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.119 [2024-10-28 13:37:26.015117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:12.119 [2024-10-28 13:37:26.051391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.119 [2024-10-28 13:37:26.123208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.119 [2024-10-28 13:37:26.200448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.119 [2024-10-28 13:37:26.200501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.054 [2024-10-28 13:37:26.900192] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:13.054 [2024-10-28 13:37:26.900546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:13.054 [2024-10-28 13:37:26.900682] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:13.054 [2024-10-28 13:37:26.900809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:13.054 [2024-10-28 13:37:26.900842] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:13.054 [2024-10-28 13:37:26.900858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.054 "name": "Existed_Raid", 00:26:13.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.054 "strip_size_kb": 0, 00:26:13.054 "state": "configuring", 00:26:13.054 "raid_level": "raid1", 00:26:13.054 
"superblock": false, 00:26:13.054 "num_base_bdevs": 3, 00:26:13.054 "num_base_bdevs_discovered": 0, 00:26:13.054 "num_base_bdevs_operational": 3, 00:26:13.054 "base_bdevs_list": [ 00:26:13.054 { 00:26:13.054 "name": "BaseBdev1", 00:26:13.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.054 "is_configured": false, 00:26:13.054 "data_offset": 0, 00:26:13.054 "data_size": 0 00:26:13.054 }, 00:26:13.054 { 00:26:13.054 "name": "BaseBdev2", 00:26:13.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.054 "is_configured": false, 00:26:13.054 "data_offset": 0, 00:26:13.054 "data_size": 0 00:26:13.054 }, 00:26:13.054 { 00:26:13.054 "name": "BaseBdev3", 00:26:13.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.054 "is_configured": false, 00:26:13.054 "data_offset": 0, 00:26:13.054 "data_size": 0 00:26:13.054 } 00:26:13.054 ] 00:26:13.054 }' 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.054 13:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 [2024-10-28 13:37:27.420211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:13.313 [2024-10-28 13:37:27.420278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:13.313 13:37:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 [2024-10-28 13:37:27.428261] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:13.313 [2024-10-28 13:37:27.428601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:13.313 [2024-10-28 13:37:27.428734] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:13.313 [2024-10-28 13:37:27.428890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:13.313 [2024-10-28 13:37:27.429008] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:13.313 [2024-10-28 13:37:27.429042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 [2024-10-28 13:37:27.451544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.313 BaseBdev1 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.313 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.572 [ 00:26:13.572 { 00:26:13.572 "name": "BaseBdev1", 00:26:13.572 "aliases": [ 00:26:13.572 "52e1909e-1423-43d9-870e-f4775e314b0d" 00:26:13.572 ], 00:26:13.572 "product_name": "Malloc disk", 00:26:13.572 "block_size": 512, 00:26:13.572 "num_blocks": 65536, 00:26:13.572 "uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:13.572 "assigned_rate_limits": { 00:26:13.572 "rw_ios_per_sec": 0, 00:26:13.572 "rw_mbytes_per_sec": 0, 00:26:13.572 "r_mbytes_per_sec": 0, 00:26:13.572 "w_mbytes_per_sec": 0 00:26:13.572 }, 00:26:13.572 "claimed": true, 00:26:13.572 "claim_type": "exclusive_write", 00:26:13.572 "zoned": false, 00:26:13.572 "supported_io_types": { 00:26:13.572 "read": true, 00:26:13.572 "write": true, 00:26:13.572 "unmap": true, 00:26:13.572 "flush": true, 00:26:13.572 "reset": true, 00:26:13.572 
"nvme_admin": false, 00:26:13.572 "nvme_io": false, 00:26:13.572 "nvme_io_md": false, 00:26:13.572 "write_zeroes": true, 00:26:13.572 "zcopy": true, 00:26:13.572 "get_zone_info": false, 00:26:13.572 "zone_management": false, 00:26:13.572 "zone_append": false, 00:26:13.572 "compare": false, 00:26:13.572 "compare_and_write": false, 00:26:13.572 "abort": true, 00:26:13.572 "seek_hole": false, 00:26:13.572 "seek_data": false, 00:26:13.572 "copy": true, 00:26:13.572 "nvme_iov_md": false 00:26:13.572 }, 00:26:13.572 "memory_domains": [ 00:26:13.572 { 00:26:13.572 "dma_device_id": "system", 00:26:13.572 "dma_device_type": 1 00:26:13.572 }, 00:26:13.572 { 00:26:13.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.572 "dma_device_type": 2 00:26:13.572 } 00:26:13.572 ], 00:26:13.572 "driver_specific": {} 00:26:13.572 } 00:26:13.572 ] 00:26:13.572 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.572 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:13.572 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:13.572 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.573 "name": "Existed_Raid", 00:26:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.573 "strip_size_kb": 0, 00:26:13.573 "state": "configuring", 00:26:13.573 "raid_level": "raid1", 00:26:13.573 "superblock": false, 00:26:13.573 "num_base_bdevs": 3, 00:26:13.573 "num_base_bdevs_discovered": 1, 00:26:13.573 "num_base_bdevs_operational": 3, 00:26:13.573 "base_bdevs_list": [ 00:26:13.573 { 00:26:13.573 "name": "BaseBdev1", 00:26:13.573 "uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:13.573 "is_configured": true, 00:26:13.573 "data_offset": 0, 00:26:13.573 "data_size": 65536 00:26:13.573 }, 00:26:13.573 { 00:26:13.573 "name": "BaseBdev2", 00:26:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.573 "is_configured": false, 00:26:13.573 "data_offset": 0, 00:26:13.573 "data_size": 0 00:26:13.573 }, 00:26:13.573 { 00:26:13.573 "name": "BaseBdev3", 00:26:13.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.573 "is_configured": false, 00:26:13.573 "data_offset": 0, 00:26:13.573 "data_size": 0 00:26:13.573 } 00:26:13.573 ] 00:26:13.573 }' 
00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.573 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.140 13:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:14.140 13:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.140 [2024-10-28 13:37:28.007767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:14.140 [2024-10-28 13:37:28.007898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.140 [2024-10-28 13:37:28.015743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.140 [2024-10-28 13:37:28.018445] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:14.140 [2024-10-28 13:37:28.018512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:14.140 [2024-10-28 13:37:28.018534] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:14.140 [2024-10-28 13:37:28.018549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:14.140 13:37:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.140 "name": "Existed_Raid", 00:26:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.140 "strip_size_kb": 0, 00:26:14.140 "state": "configuring", 00:26:14.140 "raid_level": "raid1", 00:26:14.140 "superblock": false, 00:26:14.140 "num_base_bdevs": 3, 00:26:14.140 "num_base_bdevs_discovered": 1, 00:26:14.140 "num_base_bdevs_operational": 3, 00:26:14.140 "base_bdevs_list": [ 00:26:14.140 { 00:26:14.140 "name": "BaseBdev1", 00:26:14.140 "uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:14.140 "is_configured": true, 00:26:14.140 "data_offset": 0, 00:26:14.140 "data_size": 65536 00:26:14.140 }, 00:26:14.140 { 00:26:14.140 "name": "BaseBdev2", 00:26:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.140 "is_configured": false, 00:26:14.140 "data_offset": 0, 00:26:14.140 "data_size": 0 00:26:14.140 }, 00:26:14.140 { 00:26:14.140 "name": "BaseBdev3", 00:26:14.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.140 "is_configured": false, 00:26:14.140 "data_offset": 0, 00:26:14.140 "data_size": 0 00:26:14.140 } 00:26:14.140 ] 00:26:14.140 }' 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.140 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.399 [2024-10-28 13:37:28.548483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:14.399 BaseBdev2 00:26:14.399 13:37:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.399 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.658 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.659 [ 00:26:14.659 { 00:26:14.659 "name": "BaseBdev2", 00:26:14.659 "aliases": [ 00:26:14.659 "d30a2de6-f970-4e41-8077-93eb353cb4f7" 00:26:14.659 ], 00:26:14.659 "product_name": "Malloc disk", 00:26:14.659 "block_size": 512, 00:26:14.659 "num_blocks": 65536, 00:26:14.659 "uuid": "d30a2de6-f970-4e41-8077-93eb353cb4f7", 00:26:14.659 "assigned_rate_limits": { 00:26:14.659 "rw_ios_per_sec": 0, 00:26:14.659 "rw_mbytes_per_sec": 0, 00:26:14.659 
"r_mbytes_per_sec": 0, 00:26:14.659 "w_mbytes_per_sec": 0 00:26:14.659 }, 00:26:14.659 "claimed": true, 00:26:14.659 "claim_type": "exclusive_write", 00:26:14.659 "zoned": false, 00:26:14.659 "supported_io_types": { 00:26:14.659 "read": true, 00:26:14.659 "write": true, 00:26:14.659 "unmap": true, 00:26:14.659 "flush": true, 00:26:14.659 "reset": true, 00:26:14.659 "nvme_admin": false, 00:26:14.659 "nvme_io": false, 00:26:14.659 "nvme_io_md": false, 00:26:14.659 "write_zeroes": true, 00:26:14.659 "zcopy": true, 00:26:14.659 "get_zone_info": false, 00:26:14.659 "zone_management": false, 00:26:14.659 "zone_append": false, 00:26:14.659 "compare": false, 00:26:14.659 "compare_and_write": false, 00:26:14.659 "abort": true, 00:26:14.659 "seek_hole": false, 00:26:14.659 "seek_data": false, 00:26:14.659 "copy": true, 00:26:14.659 "nvme_iov_md": false 00:26:14.659 }, 00:26:14.659 "memory_domains": [ 00:26:14.659 { 00:26:14.659 "dma_device_id": "system", 00:26:14.659 "dma_device_type": 1 00:26:14.659 }, 00:26:14.659 { 00:26:14.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.659 "dma_device_type": 2 00:26:14.659 } 00:26:14.659 ], 00:26:14.659 "driver_specific": {} 00:26:14.659 } 00:26:14.659 ] 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.659 "name": "Existed_Raid", 00:26:14.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.659 "strip_size_kb": 0, 00:26:14.659 "state": "configuring", 00:26:14.659 "raid_level": "raid1", 00:26:14.659 "superblock": false, 00:26:14.659 "num_base_bdevs": 3, 00:26:14.659 "num_base_bdevs_discovered": 2, 00:26:14.659 "num_base_bdevs_operational": 3, 00:26:14.659 "base_bdevs_list": [ 00:26:14.659 { 00:26:14.659 "name": "BaseBdev1", 00:26:14.659 "uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:14.659 
"is_configured": true, 00:26:14.659 "data_offset": 0, 00:26:14.659 "data_size": 65536 00:26:14.659 }, 00:26:14.659 { 00:26:14.659 "name": "BaseBdev2", 00:26:14.659 "uuid": "d30a2de6-f970-4e41-8077-93eb353cb4f7", 00:26:14.659 "is_configured": true, 00:26:14.659 "data_offset": 0, 00:26:14.659 "data_size": 65536 00:26:14.659 }, 00:26:14.659 { 00:26:14.659 "name": "BaseBdev3", 00:26:14.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.659 "is_configured": false, 00:26:14.659 "data_offset": 0, 00:26:14.659 "data_size": 0 00:26:14.659 } 00:26:14.659 ] 00:26:14.659 }' 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.659 13:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.228 [2024-10-28 13:37:29.130643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:15.228 [2024-10-28 13:37:29.130737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:15.228 [2024-10-28 13:37:29.130754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:15.228 [2024-10-28 13:37:29.131175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:15.228 [2024-10-28 13:37:29.131399] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:15.228 [2024-10-28 13:37:29.131431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:26:15.228 [2024-10-28 13:37:29.131763] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:26:15.228 BaseBdev3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.228 [ 00:26:15.228 { 00:26:15.228 "name": "BaseBdev3", 00:26:15.228 "aliases": [ 00:26:15.228 "761f0762-60ef-4dcf-ba97-751cbea116c9" 00:26:15.228 ], 00:26:15.228 "product_name": "Malloc disk", 00:26:15.228 "block_size": 512, 00:26:15.228 "num_blocks": 65536, 00:26:15.228 "uuid": "761f0762-60ef-4dcf-ba97-751cbea116c9", 00:26:15.228 "assigned_rate_limits": { 
00:26:15.228 "rw_ios_per_sec": 0, 00:26:15.228 "rw_mbytes_per_sec": 0, 00:26:15.228 "r_mbytes_per_sec": 0, 00:26:15.228 "w_mbytes_per_sec": 0 00:26:15.228 }, 00:26:15.228 "claimed": true, 00:26:15.228 "claim_type": "exclusive_write", 00:26:15.228 "zoned": false, 00:26:15.228 "supported_io_types": { 00:26:15.228 "read": true, 00:26:15.228 "write": true, 00:26:15.228 "unmap": true, 00:26:15.228 "flush": true, 00:26:15.228 "reset": true, 00:26:15.228 "nvme_admin": false, 00:26:15.228 "nvme_io": false, 00:26:15.228 "nvme_io_md": false, 00:26:15.228 "write_zeroes": true, 00:26:15.228 "zcopy": true, 00:26:15.228 "get_zone_info": false, 00:26:15.228 "zone_management": false, 00:26:15.228 "zone_append": false, 00:26:15.228 "compare": false, 00:26:15.228 "compare_and_write": false, 00:26:15.228 "abort": true, 00:26:15.228 "seek_hole": false, 00:26:15.228 "seek_data": false, 00:26:15.228 "copy": true, 00:26:15.228 "nvme_iov_md": false 00:26:15.228 }, 00:26:15.228 "memory_domains": [ 00:26:15.228 { 00:26:15.228 "dma_device_id": "system", 00:26:15.228 "dma_device_type": 1 00:26:15.228 }, 00:26:15.228 { 00:26:15.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.228 "dma_device_type": 2 00:26:15.228 } 00:26:15.228 ], 00:26:15.228 "driver_specific": {} 00:26:15.228 } 00:26:15.228 ] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:15.228 13:37:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:15.228 "name": "Existed_Raid", 00:26:15.228 "uuid": "a92ef2af-e6e0-4ee6-a01c-83110e17ebff", 00:26:15.228 "strip_size_kb": 0, 00:26:15.228 "state": "online", 00:26:15.228 "raid_level": "raid1", 00:26:15.228 "superblock": false, 00:26:15.228 "num_base_bdevs": 3, 00:26:15.228 "num_base_bdevs_discovered": 3, 00:26:15.228 "num_base_bdevs_operational": 3, 00:26:15.228 "base_bdevs_list": [ 00:26:15.228 { 00:26:15.228 "name": "BaseBdev1", 00:26:15.228 
"uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:15.228 "is_configured": true, 00:26:15.228 "data_offset": 0, 00:26:15.228 "data_size": 65536 00:26:15.228 }, 00:26:15.228 { 00:26:15.228 "name": "BaseBdev2", 00:26:15.228 "uuid": "d30a2de6-f970-4e41-8077-93eb353cb4f7", 00:26:15.228 "is_configured": true, 00:26:15.228 "data_offset": 0, 00:26:15.228 "data_size": 65536 00:26:15.228 }, 00:26:15.228 { 00:26:15.228 "name": "BaseBdev3", 00:26:15.228 "uuid": "761f0762-60ef-4dcf-ba97-751cbea116c9", 00:26:15.228 "is_configured": true, 00:26:15.228 "data_offset": 0, 00:26:15.228 "data_size": 65536 00:26:15.228 } 00:26:15.228 ] 00:26:15.228 }' 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:15.228 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.794 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.795 [2024-10-28 
13:37:29.691289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:15.795 "name": "Existed_Raid", 00:26:15.795 "aliases": [ 00:26:15.795 "a92ef2af-e6e0-4ee6-a01c-83110e17ebff" 00:26:15.795 ], 00:26:15.795 "product_name": "Raid Volume", 00:26:15.795 "block_size": 512, 00:26:15.795 "num_blocks": 65536, 00:26:15.795 "uuid": "a92ef2af-e6e0-4ee6-a01c-83110e17ebff", 00:26:15.795 "assigned_rate_limits": { 00:26:15.795 "rw_ios_per_sec": 0, 00:26:15.795 "rw_mbytes_per_sec": 0, 00:26:15.795 "r_mbytes_per_sec": 0, 00:26:15.795 "w_mbytes_per_sec": 0 00:26:15.795 }, 00:26:15.795 "claimed": false, 00:26:15.795 "zoned": false, 00:26:15.795 "supported_io_types": { 00:26:15.795 "read": true, 00:26:15.795 "write": true, 00:26:15.795 "unmap": false, 00:26:15.795 "flush": false, 00:26:15.795 "reset": true, 00:26:15.795 "nvme_admin": false, 00:26:15.795 "nvme_io": false, 00:26:15.795 "nvme_io_md": false, 00:26:15.795 "write_zeroes": true, 00:26:15.795 "zcopy": false, 00:26:15.795 "get_zone_info": false, 00:26:15.795 "zone_management": false, 00:26:15.795 "zone_append": false, 00:26:15.795 "compare": false, 00:26:15.795 "compare_and_write": false, 00:26:15.795 "abort": false, 00:26:15.795 "seek_hole": false, 00:26:15.795 "seek_data": false, 00:26:15.795 "copy": false, 00:26:15.795 "nvme_iov_md": false 00:26:15.795 }, 00:26:15.795 "memory_domains": [ 00:26:15.795 { 00:26:15.795 "dma_device_id": "system", 00:26:15.795 "dma_device_type": 1 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.795 "dma_device_type": 2 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "dma_device_id": "system", 00:26:15.795 "dma_device_type": 1 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:26:15.795 "dma_device_type": 2 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "dma_device_id": "system", 00:26:15.795 "dma_device_type": 1 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.795 "dma_device_type": 2 00:26:15.795 } 00:26:15.795 ], 00:26:15.795 "driver_specific": { 00:26:15.795 "raid": { 00:26:15.795 "uuid": "a92ef2af-e6e0-4ee6-a01c-83110e17ebff", 00:26:15.795 "strip_size_kb": 0, 00:26:15.795 "state": "online", 00:26:15.795 "raid_level": "raid1", 00:26:15.795 "superblock": false, 00:26:15.795 "num_base_bdevs": 3, 00:26:15.795 "num_base_bdevs_discovered": 3, 00:26:15.795 "num_base_bdevs_operational": 3, 00:26:15.795 "base_bdevs_list": [ 00:26:15.795 { 00:26:15.795 "name": "BaseBdev1", 00:26:15.795 "uuid": "52e1909e-1423-43d9-870e-f4775e314b0d", 00:26:15.795 "is_configured": true, 00:26:15.795 "data_offset": 0, 00:26:15.795 "data_size": 65536 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "name": "BaseBdev2", 00:26:15.795 "uuid": "d30a2de6-f970-4e41-8077-93eb353cb4f7", 00:26:15.795 "is_configured": true, 00:26:15.795 "data_offset": 0, 00:26:15.795 "data_size": 65536 00:26:15.795 }, 00:26:15.795 { 00:26:15.795 "name": "BaseBdev3", 00:26:15.795 "uuid": "761f0762-60ef-4dcf-ba97-751cbea116c9", 00:26:15.795 "is_configured": true, 00:26:15.795 "data_offset": 0, 00:26:15.795 "data_size": 65536 00:26:15.795 } 00:26:15.795 ] 00:26:15.795 } 00:26:15.795 } 00:26:15.795 }' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:15.795 BaseBdev2 00:26:15.795 BaseBdev3' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.795 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.053 13:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.053 [2024-10-28 13:37:30.019162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:16.053 13:37:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.053 "name": "Existed_Raid", 00:26:16.053 "uuid": "a92ef2af-e6e0-4ee6-a01c-83110e17ebff", 00:26:16.053 "strip_size_kb": 0, 00:26:16.053 "state": "online", 00:26:16.053 "raid_level": "raid1", 
00:26:16.053 "superblock": false, 00:26:16.053 "num_base_bdevs": 3, 00:26:16.053 "num_base_bdevs_discovered": 2, 00:26:16.053 "num_base_bdevs_operational": 2, 00:26:16.053 "base_bdevs_list": [ 00:26:16.053 { 00:26:16.053 "name": null, 00:26:16.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.053 "is_configured": false, 00:26:16.053 "data_offset": 0, 00:26:16.053 "data_size": 65536 00:26:16.053 }, 00:26:16.053 { 00:26:16.053 "name": "BaseBdev2", 00:26:16.053 "uuid": "d30a2de6-f970-4e41-8077-93eb353cb4f7", 00:26:16.053 "is_configured": true, 00:26:16.053 "data_offset": 0, 00:26:16.053 "data_size": 65536 00:26:16.053 }, 00:26:16.053 { 00:26:16.053 "name": "BaseBdev3", 00:26:16.053 "uuid": "761f0762-60ef-4dcf-ba97-751cbea116c9", 00:26:16.053 "is_configured": true, 00:26:16.053 "data_offset": 0, 00:26:16.053 "data_size": 65536 00:26:16.053 } 00:26:16.053 ] 00:26:16.053 }' 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.053 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:16.621 13:37:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 [2024-10-28 13:37:30.631430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 
[2024-10-28 13:37:30.711081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:16.621 [2024-10-28 13:37:30.711269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.621 [2024-10-28 13:37:30.731683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.621 [2024-10-28 13:37:30.731780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.621 [2024-10-28 13:37:30.731802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.621 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.881 BaseBdev2 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.881 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:16.881 [ 00:26:16.881 { 00:26:16.881 "name": "BaseBdev2", 00:26:16.881 "aliases": [ 00:26:16.881 "456e2df1-8323-4d60-b475-bf12b4f056f5" 00:26:16.881 ], 00:26:16.881 "product_name": "Malloc disk", 00:26:16.881 "block_size": 512, 00:26:16.881 "num_blocks": 65536, 00:26:16.881 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:16.881 "assigned_rate_limits": { 00:26:16.881 "rw_ios_per_sec": 0, 00:26:16.881 "rw_mbytes_per_sec": 0, 00:26:16.881 "r_mbytes_per_sec": 0, 00:26:16.881 "w_mbytes_per_sec": 0 00:26:16.881 }, 00:26:16.881 "claimed": false, 00:26:16.881 "zoned": false, 00:26:16.881 "supported_io_types": { 00:26:16.881 "read": true, 00:26:16.882 "write": true, 00:26:16.882 "unmap": true, 00:26:16.882 "flush": true, 00:26:16.882 "reset": true, 00:26:16.882 "nvme_admin": false, 00:26:16.882 "nvme_io": false, 00:26:16.882 "nvme_io_md": false, 00:26:16.882 "write_zeroes": true, 00:26:16.882 "zcopy": true, 00:26:16.882 "get_zone_info": false, 00:26:16.882 "zone_management": false, 00:26:16.882 "zone_append": false, 00:26:16.882 "compare": false, 00:26:16.882 "compare_and_write": false, 00:26:16.882 "abort": true, 00:26:16.882 "seek_hole": false, 00:26:16.882 "seek_data": false, 00:26:16.882 "copy": true, 00:26:16.882 "nvme_iov_md": false 00:26:16.882 }, 00:26:16.882 "memory_domains": [ 00:26:16.882 { 00:26:16.882 "dma_device_id": "system", 00:26:16.882 "dma_device_type": 1 00:26:16.882 }, 00:26:16.882 { 00:26:16.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.882 "dma_device_type": 2 00:26:16.882 } 00:26:16.882 ], 00:26:16.882 "driver_specific": {} 00:26:16.882 } 00:26:16.882 ] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.882 BaseBdev3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:16.882 [ 00:26:16.882 { 00:26:16.882 "name": "BaseBdev3", 00:26:16.882 "aliases": [ 00:26:16.882 "61e9e5f3-1f4a-417d-86e8-dd795811d913" 00:26:16.882 ], 00:26:16.882 "product_name": "Malloc disk", 00:26:16.882 "block_size": 512, 00:26:16.882 "num_blocks": 65536, 00:26:16.882 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:16.882 "assigned_rate_limits": { 00:26:16.882 "rw_ios_per_sec": 0, 00:26:16.882 "rw_mbytes_per_sec": 0, 00:26:16.882 "r_mbytes_per_sec": 0, 00:26:16.882 "w_mbytes_per_sec": 0 00:26:16.882 }, 00:26:16.882 "claimed": false, 00:26:16.882 "zoned": false, 00:26:16.882 "supported_io_types": { 00:26:16.882 "read": true, 00:26:16.882 "write": true, 00:26:16.882 "unmap": true, 00:26:16.882 "flush": true, 00:26:16.882 "reset": true, 00:26:16.882 "nvme_admin": false, 00:26:16.882 "nvme_io": false, 00:26:16.882 "nvme_io_md": false, 00:26:16.882 "write_zeroes": true, 00:26:16.882 "zcopy": true, 00:26:16.882 "get_zone_info": false, 00:26:16.882 "zone_management": false, 00:26:16.882 "zone_append": false, 00:26:16.882 "compare": false, 00:26:16.882 "compare_and_write": false, 00:26:16.882 "abort": true, 00:26:16.882 "seek_hole": false, 00:26:16.882 "seek_data": false, 00:26:16.882 "copy": true, 00:26:16.882 "nvme_iov_md": false 00:26:16.882 }, 00:26:16.882 "memory_domains": [ 00:26:16.882 { 00:26:16.882 "dma_device_id": "system", 00:26:16.882 "dma_device_type": 1 00:26:16.882 }, 00:26:16.882 { 00:26:16.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.882 "dma_device_type": 2 00:26:16.882 } 00:26:16.882 ], 00:26:16.882 "driver_specific": {} 00:26:16.882 } 00:26:16.882 ] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.882 [2024-10-28 13:37:30.902270] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:16.882 [2024-10-28 13:37:30.902382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:16.882 [2024-10-28 13:37:30.902427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:16.882 [2024-10-28 13:37:30.905299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.882 "name": "Existed_Raid", 00:26:16.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.882 "strip_size_kb": 0, 00:26:16.882 "state": "configuring", 00:26:16.882 "raid_level": "raid1", 00:26:16.882 "superblock": false, 00:26:16.882 "num_base_bdevs": 3, 00:26:16.882 "num_base_bdevs_discovered": 2, 00:26:16.882 "num_base_bdevs_operational": 3, 00:26:16.882 "base_bdevs_list": [ 00:26:16.882 { 00:26:16.882 "name": "BaseBdev1", 00:26:16.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.882 "is_configured": false, 00:26:16.882 "data_offset": 0, 00:26:16.882 "data_size": 0 00:26:16.882 }, 00:26:16.882 { 00:26:16.882 "name": "BaseBdev2", 00:26:16.882 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:16.882 "is_configured": true, 00:26:16.882 "data_offset": 0, 00:26:16.882 "data_size": 65536 00:26:16.882 }, 00:26:16.882 { 00:26:16.882 "name": "BaseBdev3", 00:26:16.882 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:16.882 "is_configured": true, 00:26:16.882 "data_offset": 0, 00:26:16.882 "data_size": 65536 00:26:16.882 } 00:26:16.882 ] 
00:26:16.882 }' 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.882 13:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.450 [2024-10-28 13:37:31.450387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.450 "name": "Existed_Raid", 00:26:17.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.450 "strip_size_kb": 0, 00:26:17.450 "state": "configuring", 00:26:17.450 "raid_level": "raid1", 00:26:17.450 "superblock": false, 00:26:17.450 "num_base_bdevs": 3, 00:26:17.450 "num_base_bdevs_discovered": 1, 00:26:17.450 "num_base_bdevs_operational": 3, 00:26:17.450 "base_bdevs_list": [ 00:26:17.450 { 00:26:17.450 "name": "BaseBdev1", 00:26:17.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.450 "is_configured": false, 00:26:17.450 "data_offset": 0, 00:26:17.450 "data_size": 0 00:26:17.450 }, 00:26:17.450 { 00:26:17.450 "name": null, 00:26:17.450 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:17.450 "is_configured": false, 00:26:17.450 "data_offset": 0, 00:26:17.450 "data_size": 65536 00:26:17.450 }, 00:26:17.450 { 00:26:17.450 "name": "BaseBdev3", 00:26:17.450 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:17.450 "is_configured": true, 00:26:17.450 "data_offset": 0, 00:26:17.450 "data_size": 65536 00:26:17.450 } 00:26:17.450 ] 00:26:17.450 }' 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.450 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.018 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:26:18.018 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.018 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.018 13:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:18.018 13:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.018 [2024-10-28 13:37:32.046911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:18.018 BaseBdev1 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.018 [ 00:26:18.018 { 00:26:18.018 "name": "BaseBdev1", 00:26:18.018 "aliases": [ 00:26:18.018 "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac" 00:26:18.018 ], 00:26:18.018 "product_name": "Malloc disk", 00:26:18.018 "block_size": 512, 00:26:18.018 "num_blocks": 65536, 00:26:18.018 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:18.018 "assigned_rate_limits": { 00:26:18.018 "rw_ios_per_sec": 0, 00:26:18.018 "rw_mbytes_per_sec": 0, 00:26:18.018 "r_mbytes_per_sec": 0, 00:26:18.018 "w_mbytes_per_sec": 0 00:26:18.018 }, 00:26:18.018 "claimed": true, 00:26:18.018 "claim_type": "exclusive_write", 00:26:18.018 "zoned": false, 00:26:18.018 "supported_io_types": { 00:26:18.018 "read": true, 00:26:18.018 "write": true, 00:26:18.018 "unmap": true, 00:26:18.018 "flush": true, 00:26:18.018 "reset": true, 00:26:18.018 "nvme_admin": false, 00:26:18.018 "nvme_io": false, 00:26:18.018 "nvme_io_md": false, 00:26:18.018 "write_zeroes": true, 00:26:18.018 "zcopy": true, 00:26:18.018 "get_zone_info": false, 00:26:18.018 "zone_management": false, 00:26:18.018 "zone_append": false, 00:26:18.018 "compare": false, 00:26:18.018 "compare_and_write": false, 00:26:18.018 "abort": true, 00:26:18.018 "seek_hole": false, 00:26:18.018 "seek_data": false, 00:26:18.018 "copy": true, 00:26:18.018 "nvme_iov_md": false 00:26:18.018 }, 
00:26:18.018 "memory_domains": [ 00:26:18.018 { 00:26:18.018 "dma_device_id": "system", 00:26:18.018 "dma_device_type": 1 00:26:18.018 }, 00:26:18.018 { 00:26:18.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.018 "dma_device_type": 2 00:26:18.018 } 00:26:18.018 ], 00:26:18.018 "driver_specific": {} 00:26:18.018 } 00:26:18.018 ] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.018 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.018 13:37:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.019 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.019 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.019 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.019 "name": "Existed_Raid", 00:26:18.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.019 "strip_size_kb": 0, 00:26:18.019 "state": "configuring", 00:26:18.019 "raid_level": "raid1", 00:26:18.019 "superblock": false, 00:26:18.019 "num_base_bdevs": 3, 00:26:18.019 "num_base_bdevs_discovered": 2, 00:26:18.019 "num_base_bdevs_operational": 3, 00:26:18.019 "base_bdevs_list": [ 00:26:18.019 { 00:26:18.019 "name": "BaseBdev1", 00:26:18.019 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:18.019 "is_configured": true, 00:26:18.019 "data_offset": 0, 00:26:18.019 "data_size": 65536 00:26:18.019 }, 00:26:18.019 { 00:26:18.019 "name": null, 00:26:18.019 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:18.019 "is_configured": false, 00:26:18.019 "data_offset": 0, 00:26:18.019 "data_size": 65536 00:26:18.019 }, 00:26:18.019 { 00:26:18.019 "name": "BaseBdev3", 00:26:18.019 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:18.019 "is_configured": true, 00:26:18.019 "data_offset": 0, 00:26:18.019 "data_size": 65536 00:26:18.019 } 00:26:18.019 ] 00:26:18.019 }' 00:26:18.019 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.019 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.587 [2024-10-28 13:37:32.663213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.587 "name": "Existed_Raid", 00:26:18.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.587 "strip_size_kb": 0, 00:26:18.587 "state": "configuring", 00:26:18.587 "raid_level": "raid1", 00:26:18.587 "superblock": false, 00:26:18.587 "num_base_bdevs": 3, 00:26:18.587 "num_base_bdevs_discovered": 1, 00:26:18.587 "num_base_bdevs_operational": 3, 00:26:18.587 "base_bdevs_list": [ 00:26:18.587 { 00:26:18.587 "name": "BaseBdev1", 00:26:18.587 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:18.587 "is_configured": true, 00:26:18.587 "data_offset": 0, 00:26:18.587 "data_size": 65536 00:26:18.587 }, 00:26:18.587 { 00:26:18.587 "name": null, 00:26:18.587 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:18.587 "is_configured": false, 00:26:18.587 "data_offset": 0, 00:26:18.587 "data_size": 65536 00:26:18.587 }, 00:26:18.587 { 00:26:18.587 "name": null, 00:26:18.587 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:18.587 "is_configured": false, 00:26:18.587 "data_offset": 0, 00:26:18.587 "data_size": 65536 00:26:18.587 } 00:26:18.587 ] 00:26:18.587 }' 00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:26:18.587 13:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.155 [2024-10-28 13:37:33.263391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:19.155 13:37:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.155 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.413 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.413 "name": "Existed_Raid", 00:26:19.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.413 "strip_size_kb": 0, 00:26:19.413 "state": "configuring", 00:26:19.413 "raid_level": "raid1", 00:26:19.413 "superblock": false, 00:26:19.413 "num_base_bdevs": 3, 00:26:19.413 "num_base_bdevs_discovered": 2, 00:26:19.414 "num_base_bdevs_operational": 3, 00:26:19.414 "base_bdevs_list": [ 00:26:19.414 { 00:26:19.414 "name": "BaseBdev1", 00:26:19.414 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:19.414 "is_configured": true, 00:26:19.414 "data_offset": 0, 00:26:19.414 "data_size": 65536 00:26:19.414 }, 00:26:19.414 { 00:26:19.414 "name": null, 00:26:19.414 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:19.414 "is_configured": false, 00:26:19.414 "data_offset": 
0, 00:26:19.414 "data_size": 65536 00:26:19.414 }, 00:26:19.414 { 00:26:19.414 "name": "BaseBdev3", 00:26:19.414 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:19.414 "is_configured": true, 00:26:19.414 "data_offset": 0, 00:26:19.414 "data_size": 65536 00:26:19.414 } 00:26:19.414 ] 00:26:19.414 }' 00:26:19.414 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.414 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.672 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.672 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.672 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.672 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:19.672 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.930 [2024-10-28 13:37:33.859609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:19.930 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:19.931 "name": "Existed_Raid", 00:26:19.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.931 "strip_size_kb": 0, 00:26:19.931 "state": "configuring", 00:26:19.931 "raid_level": "raid1", 00:26:19.931 "superblock": false, 00:26:19.931 "num_base_bdevs": 3, 00:26:19.931 "num_base_bdevs_discovered": 1, 00:26:19.931 "num_base_bdevs_operational": 3, 00:26:19.931 "base_bdevs_list": [ 
00:26:19.931 { 00:26:19.931 "name": null, 00:26:19.931 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:19.931 "is_configured": false, 00:26:19.931 "data_offset": 0, 00:26:19.931 "data_size": 65536 00:26:19.931 }, 00:26:19.931 { 00:26:19.931 "name": null, 00:26:19.931 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:19.931 "is_configured": false, 00:26:19.931 "data_offset": 0, 00:26:19.931 "data_size": 65536 00:26:19.931 }, 00:26:19.931 { 00:26:19.931 "name": "BaseBdev3", 00:26:19.931 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:19.931 "is_configured": true, 00:26:19.931 "data_offset": 0, 00:26:19.931 "data_size": 65536 00:26:19.931 } 00:26:19.931 ] 00:26:19.931 }' 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:19.931 13:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.555 [2024-10-28 13:37:34.449649] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:26:20.555 "name": "Existed_Raid", 00:26:20.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:20.555 "strip_size_kb": 0, 00:26:20.555 "state": "configuring", 00:26:20.555 "raid_level": "raid1", 00:26:20.555 "superblock": false, 00:26:20.555 "num_base_bdevs": 3, 00:26:20.555 "num_base_bdevs_discovered": 2, 00:26:20.555 "num_base_bdevs_operational": 3, 00:26:20.555 "base_bdevs_list": [ 00:26:20.555 { 00:26:20.555 "name": null, 00:26:20.555 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:20.555 "is_configured": false, 00:26:20.555 "data_offset": 0, 00:26:20.555 "data_size": 65536 00:26:20.555 }, 00:26:20.555 { 00:26:20.555 "name": "BaseBdev2", 00:26:20.555 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:20.555 "is_configured": true, 00:26:20.555 "data_offset": 0, 00:26:20.555 "data_size": 65536 00:26:20.555 }, 00:26:20.555 { 00:26:20.555 "name": "BaseBdev3", 00:26:20.555 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:20.555 "is_configured": true, 00:26:20.555 "data_offset": 0, 00:26:20.555 "data_size": 65536 00:26:20.555 } 00:26:20.555 ] 00:26:20.555 }' 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:20.555 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.123 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.123 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.123 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.123 13:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:21.123 13:37:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25c8cc5d-4a5d-4e5a-accf-2db54f0031ac 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.123 [2024-10-28 13:37:35.098469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:21.123 NewBaseBdev 00:26:21.123 [2024-10-28 13:37:35.098797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:21.123 [2024-10-28 13:37:35.098829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:21.123 [2024-10-28 13:37:35.099187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:26:21.123 [2024-10-28 13:37:35.099388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:21.123 [2024-10-28 13:37:35.099406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:21.123 [2024-10-28 13:37:35.099712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.123 
13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:21.123 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 [ 00:26:21.124 { 00:26:21.124 "name": "NewBaseBdev", 00:26:21.124 "aliases": [ 00:26:21.124 "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac" 00:26:21.124 ], 00:26:21.124 "product_name": "Malloc disk", 00:26:21.124 "block_size": 512, 00:26:21.124 "num_blocks": 65536, 00:26:21.124 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:21.124 "assigned_rate_limits": { 00:26:21.124 "rw_ios_per_sec": 0, 00:26:21.124 "rw_mbytes_per_sec": 0, 00:26:21.124 "r_mbytes_per_sec": 0, 00:26:21.124 "w_mbytes_per_sec": 0 00:26:21.124 }, 00:26:21.124 
"claimed": true, 00:26:21.124 "claim_type": "exclusive_write", 00:26:21.124 "zoned": false, 00:26:21.124 "supported_io_types": { 00:26:21.124 "read": true, 00:26:21.124 "write": true, 00:26:21.124 "unmap": true, 00:26:21.124 "flush": true, 00:26:21.124 "reset": true, 00:26:21.124 "nvme_admin": false, 00:26:21.124 "nvme_io": false, 00:26:21.124 "nvme_io_md": false, 00:26:21.124 "write_zeroes": true, 00:26:21.124 "zcopy": true, 00:26:21.124 "get_zone_info": false, 00:26:21.124 "zone_management": false, 00:26:21.124 "zone_append": false, 00:26:21.124 "compare": false, 00:26:21.124 "compare_and_write": false, 00:26:21.124 "abort": true, 00:26:21.124 "seek_hole": false, 00:26:21.124 "seek_data": false, 00:26:21.124 "copy": true, 00:26:21.124 "nvme_iov_md": false 00:26:21.124 }, 00:26:21.124 "memory_domains": [ 00:26:21.124 { 00:26:21.124 "dma_device_id": "system", 00:26:21.124 "dma_device_type": 1 00:26:21.124 }, 00:26:21.124 { 00:26:21.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.124 "dma_device_type": 2 00:26:21.124 } 00:26:21.124 ], 00:26:21.124 "driver_specific": {} 00:26:21.124 } 00:26:21.124 ] 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.124 "name": "Existed_Raid", 00:26:21.124 "uuid": "d5645468-5e12-4ea1-9c7b-e2b00b662a1b", 00:26:21.124 "strip_size_kb": 0, 00:26:21.124 "state": "online", 00:26:21.124 "raid_level": "raid1", 00:26:21.124 "superblock": false, 00:26:21.124 "num_base_bdevs": 3, 00:26:21.124 "num_base_bdevs_discovered": 3, 00:26:21.124 "num_base_bdevs_operational": 3, 00:26:21.124 "base_bdevs_list": [ 00:26:21.124 { 00:26:21.124 "name": "NewBaseBdev", 00:26:21.124 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:21.124 "is_configured": true, 00:26:21.124 "data_offset": 0, 00:26:21.124 "data_size": 65536 00:26:21.124 }, 00:26:21.124 { 00:26:21.124 "name": "BaseBdev2", 00:26:21.124 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:21.124 "is_configured": true, 00:26:21.124 "data_offset": 0, 00:26:21.124 "data_size": 65536 
00:26:21.124 }, 00:26:21.124 { 00:26:21.124 "name": "BaseBdev3", 00:26:21.124 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:21.124 "is_configured": true, 00:26:21.124 "data_offset": 0, 00:26:21.124 "data_size": 65536 00:26:21.124 } 00:26:21.124 ] 00:26:21.124 }' 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.124 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.693 [2024-10-28 13:37:35.675102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.693 "name": "Existed_Raid", 00:26:21.693 "aliases": [ 
00:26:21.693 "d5645468-5e12-4ea1-9c7b-e2b00b662a1b" 00:26:21.693 ], 00:26:21.693 "product_name": "Raid Volume", 00:26:21.693 "block_size": 512, 00:26:21.693 "num_blocks": 65536, 00:26:21.693 "uuid": "d5645468-5e12-4ea1-9c7b-e2b00b662a1b", 00:26:21.693 "assigned_rate_limits": { 00:26:21.693 "rw_ios_per_sec": 0, 00:26:21.693 "rw_mbytes_per_sec": 0, 00:26:21.693 "r_mbytes_per_sec": 0, 00:26:21.693 "w_mbytes_per_sec": 0 00:26:21.693 }, 00:26:21.693 "claimed": false, 00:26:21.693 "zoned": false, 00:26:21.693 "supported_io_types": { 00:26:21.693 "read": true, 00:26:21.693 "write": true, 00:26:21.693 "unmap": false, 00:26:21.693 "flush": false, 00:26:21.693 "reset": true, 00:26:21.693 "nvme_admin": false, 00:26:21.693 "nvme_io": false, 00:26:21.693 "nvme_io_md": false, 00:26:21.693 "write_zeroes": true, 00:26:21.693 "zcopy": false, 00:26:21.693 "get_zone_info": false, 00:26:21.693 "zone_management": false, 00:26:21.693 "zone_append": false, 00:26:21.693 "compare": false, 00:26:21.693 "compare_and_write": false, 00:26:21.693 "abort": false, 00:26:21.693 "seek_hole": false, 00:26:21.693 "seek_data": false, 00:26:21.693 "copy": false, 00:26:21.693 "nvme_iov_md": false 00:26:21.693 }, 00:26:21.693 "memory_domains": [ 00:26:21.693 { 00:26:21.693 "dma_device_id": "system", 00:26:21.693 "dma_device_type": 1 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.693 "dma_device_type": 2 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "dma_device_id": "system", 00:26:21.693 "dma_device_type": 1 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.693 "dma_device_type": 2 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "dma_device_id": "system", 00:26:21.693 "dma_device_type": 1 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.693 "dma_device_type": 2 00:26:21.693 } 00:26:21.693 ], 00:26:21.693 "driver_specific": { 00:26:21.693 "raid": { 00:26:21.693 "uuid": 
"d5645468-5e12-4ea1-9c7b-e2b00b662a1b", 00:26:21.693 "strip_size_kb": 0, 00:26:21.693 "state": "online", 00:26:21.693 "raid_level": "raid1", 00:26:21.693 "superblock": false, 00:26:21.693 "num_base_bdevs": 3, 00:26:21.693 "num_base_bdevs_discovered": 3, 00:26:21.693 "num_base_bdevs_operational": 3, 00:26:21.693 "base_bdevs_list": [ 00:26:21.693 { 00:26:21.693 "name": "NewBaseBdev", 00:26:21.693 "uuid": "25c8cc5d-4a5d-4e5a-accf-2db54f0031ac", 00:26:21.693 "is_configured": true, 00:26:21.693 "data_offset": 0, 00:26:21.693 "data_size": 65536 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "name": "BaseBdev2", 00:26:21.693 "uuid": "456e2df1-8323-4d60-b475-bf12b4f056f5", 00:26:21.693 "is_configured": true, 00:26:21.693 "data_offset": 0, 00:26:21.693 "data_size": 65536 00:26:21.693 }, 00:26:21.693 { 00:26:21.693 "name": "BaseBdev3", 00:26:21.693 "uuid": "61e9e5f3-1f4a-417d-86e8-dd795811d913", 00:26:21.693 "is_configured": true, 00:26:21.693 "data_offset": 0, 00:26:21.693 "data_size": 65536 00:26:21.693 } 00:26:21.693 ] 00:26:21.693 } 00:26:21.693 } 00:26:21.693 }' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:21.693 BaseBdev2 00:26:21.693 BaseBdev3' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.693 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.952 [2024-10-28 13:37:35.986787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:21.952 [2024-10-28 13:37:35.986828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:21.952 [2024-10-28 13:37:35.986954] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:21.952 [2024-10-28 13:37:35.987340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:21.952 [2024-10-28 13:37:35.987367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80196 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80196 ']' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80196 00:26:21.952 
13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.952 13:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80196 00:26:21.952 killing process with pid 80196 00:26:21.953 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:21.953 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:21.953 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80196' 00:26:21.953 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80196 00:26:21.953 [2024-10-28 13:37:36.025944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:21.953 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80196 00:26:21.953 [2024-10-28 13:37:36.081065] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:22.519 00:26:22.519 real 0m10.610s 00:26:22.519 user 0m18.501s 00:26:22.519 sys 0m1.679s 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:22.519 ************************************ 00:26:22.519 END TEST raid_state_function_test 00:26:22.519 ************************************ 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.519 13:37:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:26:22.519 13:37:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:22.519 13:37:36 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:26:22.519 13:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:22.519 ************************************ 00:26:22.519 START TEST raid_state_function_test_sb 00:26:22.519 ************************************ 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80818 00:26:22.519 Process raid pid: 80818 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80818' 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80818 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80818 ']' 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.519 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.520 13:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:22.520 [2024-10-28 13:37:36.557554] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:22.520 [2024-10-28 13:37:36.557760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.777 [2024-10-28 13:37:36.713550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:22.777 [2024-10-28 13:37:36.745234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.777 [2024-10-28 13:37:36.813116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.777 [2024-10-28 13:37:36.888680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:22.777 [2024-10-28 13:37:36.888745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:23.342 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.342 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:26:23.342 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:23.342 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.342 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 [2024-10-28 13:37:37.496296] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:23.342 [2024-10-28 13:37:37.496374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:23.342 [2024-10-28 13:37:37.496396] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:23.342 [2024-10-28 13:37:37.496409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:23.342 [2024-10-28 13:37:37.496427] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:23.342 [2024-10-28 13:37:37.496439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.600 13:37:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.600 "name": "Existed_Raid", 00:26:23.600 "uuid": "70064f7a-f14f-4369-b2be-530f72bcb1cc", 00:26:23.600 "strip_size_kb": 0, 
00:26:23.600 "state": "configuring", 00:26:23.600 "raid_level": "raid1", 00:26:23.600 "superblock": true, 00:26:23.600 "num_base_bdevs": 3, 00:26:23.600 "num_base_bdevs_discovered": 0, 00:26:23.600 "num_base_bdevs_operational": 3, 00:26:23.600 "base_bdevs_list": [ 00:26:23.600 { 00:26:23.600 "name": "BaseBdev1", 00:26:23.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.600 "is_configured": false, 00:26:23.600 "data_offset": 0, 00:26:23.600 "data_size": 0 00:26:23.600 }, 00:26:23.600 { 00:26:23.600 "name": "BaseBdev2", 00:26:23.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.600 "is_configured": false, 00:26:23.600 "data_offset": 0, 00:26:23.600 "data_size": 0 00:26:23.600 }, 00:26:23.600 { 00:26:23.600 "name": "BaseBdev3", 00:26:23.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.600 "is_configured": false, 00:26:23.600 "data_offset": 0, 00:26:23.600 "data_size": 0 00:26:23.600 } 00:26:23.600 ] 00:26:23.600 }' 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.600 13:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 [2024-10-28 13:37:38.052287] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:24.167 [2024-10-28 13:37:38.052346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 [2024-10-28 13:37:38.060320] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:24.167 [2024-10-28 13:37:38.060377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:24.167 [2024-10-28 13:37:38.060396] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:24.167 [2024-10-28 13:37:38.060409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:24.167 [2024-10-28 13:37:38.060422] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:24.167 [2024-10-28 13:37:38.060434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 [2024-10-28 13:37:38.083755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.167 BaseBdev1 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 [ 00:26:24.167 { 00:26:24.167 "name": "BaseBdev1", 00:26:24.167 "aliases": [ 00:26:24.167 "b79a41e0-dc78-40f5-8e3e-530c90a2033f" 00:26:24.167 ], 00:26:24.167 "product_name": "Malloc disk", 00:26:24.167 "block_size": 512, 00:26:24.167 "num_blocks": 65536, 00:26:24.167 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:24.167 "assigned_rate_limits": { 00:26:24.167 "rw_ios_per_sec": 0, 00:26:24.167 "rw_mbytes_per_sec": 0, 00:26:24.167 "r_mbytes_per_sec": 0, 00:26:24.167 "w_mbytes_per_sec": 0 00:26:24.167 }, 00:26:24.167 "claimed": true, 00:26:24.167 "claim_type": "exclusive_write", 00:26:24.167 "zoned": false, 00:26:24.167 "supported_io_types": { 
00:26:24.167 "read": true, 00:26:24.167 "write": true, 00:26:24.167 "unmap": true, 00:26:24.167 "flush": true, 00:26:24.167 "reset": true, 00:26:24.167 "nvme_admin": false, 00:26:24.167 "nvme_io": false, 00:26:24.167 "nvme_io_md": false, 00:26:24.167 "write_zeroes": true, 00:26:24.167 "zcopy": true, 00:26:24.167 "get_zone_info": false, 00:26:24.167 "zone_management": false, 00:26:24.167 "zone_append": false, 00:26:24.167 "compare": false, 00:26:24.167 "compare_and_write": false, 00:26:24.167 "abort": true, 00:26:24.167 "seek_hole": false, 00:26:24.167 "seek_data": false, 00:26:24.167 "copy": true, 00:26:24.167 "nvme_iov_md": false 00:26:24.167 }, 00:26:24.167 "memory_domains": [ 00:26:24.167 { 00:26:24.167 "dma_device_id": "system", 00:26:24.167 "dma_device_type": 1 00:26:24.167 }, 00:26:24.167 { 00:26:24.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.167 "dma_device_type": 2 00:26:24.167 } 00:26:24.167 ], 00:26:24.167 "driver_specific": {} 00:26:24.167 } 00:26:24.167 ] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:24.167 13:37:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.167 "name": "Existed_Raid", 00:26:24.167 "uuid": "da6c5154-be82-43b0-9ec1-1fd90f3dca88", 00:26:24.167 "strip_size_kb": 0, 00:26:24.167 "state": "configuring", 00:26:24.167 "raid_level": "raid1", 00:26:24.167 "superblock": true, 00:26:24.167 "num_base_bdevs": 3, 00:26:24.167 "num_base_bdevs_discovered": 1, 00:26:24.167 "num_base_bdevs_operational": 3, 00:26:24.167 "base_bdevs_list": [ 00:26:24.167 { 00:26:24.167 "name": "BaseBdev1", 00:26:24.167 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:24.167 "is_configured": true, 00:26:24.167 "data_offset": 2048, 00:26:24.167 "data_size": 63488 00:26:24.167 }, 00:26:24.167 { 00:26:24.167 "name": "BaseBdev2", 00:26:24.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.167 "is_configured": false, 00:26:24.167 "data_offset": 0, 00:26:24.167 "data_size": 0 00:26:24.167 }, 00:26:24.167 { 00:26:24.167 "name": 
"BaseBdev3", 00:26:24.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.167 "is_configured": false, 00:26:24.167 "data_offset": 0, 00:26:24.167 "data_size": 0 00:26:24.167 } 00:26:24.167 ] 00:26:24.167 }' 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.167 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.735 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.736 [2024-10-28 13:37:38.663966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:24.736 [2024-10-28 13:37:38.664076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.736 [2024-10-28 13:37:38.675997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:24.736 [2024-10-28 13:37:38.678741] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:24.736 [2024-10-28 13:37:38.678800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:24.736 [2024-10-28 13:37:38.678822] 
bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:24.736 [2024-10-28 13:37:38.678835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:24.736 "name": "Existed_Raid", 00:26:24.736 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:24.736 "strip_size_kb": 0, 00:26:24.736 "state": "configuring", 00:26:24.736 "raid_level": "raid1", 00:26:24.736 "superblock": true, 00:26:24.736 "num_base_bdevs": 3, 00:26:24.736 "num_base_bdevs_discovered": 1, 00:26:24.736 "num_base_bdevs_operational": 3, 00:26:24.736 "base_bdevs_list": [ 00:26:24.736 { 00:26:24.736 "name": "BaseBdev1", 00:26:24.736 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:24.736 "is_configured": true, 00:26:24.736 "data_offset": 2048, 00:26:24.736 "data_size": 63488 00:26:24.736 }, 00:26:24.736 { 00:26:24.736 "name": "BaseBdev2", 00:26:24.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.736 "is_configured": false, 00:26:24.736 "data_offset": 0, 00:26:24.736 "data_size": 0 00:26:24.736 }, 00:26:24.736 { 00:26:24.736 "name": "BaseBdev3", 00:26:24.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.736 "is_configured": false, 00:26:24.736 "data_offset": 0, 00:26:24.736 "data_size": 0 00:26:24.736 } 00:26:24.736 ] 00:26:24.736 }' 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:24.736 13:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.304 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:25.304 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:25.304 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.305 [2024-10-28 13:37:39.220502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:25.305 BaseBdev2 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.305 [ 00:26:25.305 { 00:26:25.305 "name": "BaseBdev2", 00:26:25.305 "aliases": [ 00:26:25.305 
"b132efc4-5f18-40de-914e-3ec4a7aad54f" 00:26:25.305 ], 00:26:25.305 "product_name": "Malloc disk", 00:26:25.305 "block_size": 512, 00:26:25.305 "num_blocks": 65536, 00:26:25.305 "uuid": "b132efc4-5f18-40de-914e-3ec4a7aad54f", 00:26:25.305 "assigned_rate_limits": { 00:26:25.305 "rw_ios_per_sec": 0, 00:26:25.305 "rw_mbytes_per_sec": 0, 00:26:25.305 "r_mbytes_per_sec": 0, 00:26:25.305 "w_mbytes_per_sec": 0 00:26:25.305 }, 00:26:25.305 "claimed": true, 00:26:25.305 "claim_type": "exclusive_write", 00:26:25.305 "zoned": false, 00:26:25.305 "supported_io_types": { 00:26:25.305 "read": true, 00:26:25.305 "write": true, 00:26:25.305 "unmap": true, 00:26:25.305 "flush": true, 00:26:25.305 "reset": true, 00:26:25.305 "nvme_admin": false, 00:26:25.305 "nvme_io": false, 00:26:25.305 "nvme_io_md": false, 00:26:25.305 "write_zeroes": true, 00:26:25.305 "zcopy": true, 00:26:25.305 "get_zone_info": false, 00:26:25.305 "zone_management": false, 00:26:25.305 "zone_append": false, 00:26:25.305 "compare": false, 00:26:25.305 "compare_and_write": false, 00:26:25.305 "abort": true, 00:26:25.305 "seek_hole": false, 00:26:25.305 "seek_data": false, 00:26:25.305 "copy": true, 00:26:25.305 "nvme_iov_md": false 00:26:25.305 }, 00:26:25.305 "memory_domains": [ 00:26:25.305 { 00:26:25.305 "dma_device_id": "system", 00:26:25.305 "dma_device_type": 1 00:26:25.305 }, 00:26:25.305 { 00:26:25.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.305 "dma_device_type": 2 00:26:25.305 } 00:26:25.305 ], 00:26:25.305 "driver_specific": {} 00:26:25.305 } 00:26:25.305 ] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.305 "name": "Existed_Raid", 00:26:25.305 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:25.305 
"strip_size_kb": 0, 00:26:25.305 "state": "configuring", 00:26:25.305 "raid_level": "raid1", 00:26:25.305 "superblock": true, 00:26:25.305 "num_base_bdevs": 3, 00:26:25.305 "num_base_bdevs_discovered": 2, 00:26:25.305 "num_base_bdevs_operational": 3, 00:26:25.305 "base_bdevs_list": [ 00:26:25.305 { 00:26:25.305 "name": "BaseBdev1", 00:26:25.305 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:25.305 "is_configured": true, 00:26:25.305 "data_offset": 2048, 00:26:25.305 "data_size": 63488 00:26:25.305 }, 00:26:25.305 { 00:26:25.305 "name": "BaseBdev2", 00:26:25.305 "uuid": "b132efc4-5f18-40de-914e-3ec4a7aad54f", 00:26:25.305 "is_configured": true, 00:26:25.305 "data_offset": 2048, 00:26:25.305 "data_size": 63488 00:26:25.305 }, 00:26:25.305 { 00:26:25.305 "name": "BaseBdev3", 00:26:25.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.305 "is_configured": false, 00:26:25.305 "data_offset": 0, 00:26:25.305 "data_size": 0 00:26:25.305 } 00:26:25.305 ] 00:26:25.305 }' 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.305 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.872 [2024-10-28 13:37:39.840789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:25.872 [2024-10-28 13:37:39.841117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:25.872 [2024-10-28 13:37:39.841165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:25.872 BaseBdev3 00:26:25.872 [2024-10-28 13:37:39.841568] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:25.872 [2024-10-28 13:37:39.841770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:25.872 [2024-10-28 13:37:39.841802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:26:25.872 [2024-10-28 13:37:39.841967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.872 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.873 [ 00:26:25.873 { 00:26:25.873 "name": "BaseBdev3", 00:26:25.873 "aliases": [ 00:26:25.873 "a64a0495-8b08-4406-bb03-4b27d0fe9a46" 00:26:25.873 ], 00:26:25.873 "product_name": "Malloc disk", 00:26:25.873 "block_size": 512, 00:26:25.873 "num_blocks": 65536, 00:26:25.873 "uuid": "a64a0495-8b08-4406-bb03-4b27d0fe9a46", 00:26:25.873 "assigned_rate_limits": { 00:26:25.873 "rw_ios_per_sec": 0, 00:26:25.873 "rw_mbytes_per_sec": 0, 00:26:25.873 "r_mbytes_per_sec": 0, 00:26:25.873 "w_mbytes_per_sec": 0 00:26:25.873 }, 00:26:25.873 "claimed": true, 00:26:25.873 "claim_type": "exclusive_write", 00:26:25.873 "zoned": false, 00:26:25.873 "supported_io_types": { 00:26:25.873 "read": true, 00:26:25.873 "write": true, 00:26:25.873 "unmap": true, 00:26:25.873 "flush": true, 00:26:25.873 "reset": true, 00:26:25.873 "nvme_admin": false, 00:26:25.873 "nvme_io": false, 00:26:25.873 "nvme_io_md": false, 00:26:25.873 "write_zeroes": true, 00:26:25.873 "zcopy": true, 00:26:25.873 "get_zone_info": false, 00:26:25.873 "zone_management": false, 00:26:25.873 "zone_append": false, 00:26:25.873 "compare": false, 00:26:25.873 "compare_and_write": false, 00:26:25.873 "abort": true, 00:26:25.873 "seek_hole": false, 00:26:25.873 "seek_data": false, 00:26:25.873 "copy": true, 00:26:25.873 "nvme_iov_md": false 00:26:25.873 }, 00:26:25.873 "memory_domains": [ 00:26:25.873 { 00:26:25.873 "dma_device_id": "system", 00:26:25.873 "dma_device_type": 1 00:26:25.873 }, 00:26:25.873 { 00:26:25.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.873 "dma_device_type": 2 00:26:25.873 } 00:26:25.873 ], 00:26:25.873 "driver_specific": {} 00:26:25.873 } 00:26:25.873 ] 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:25.873 
13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.873 13:37:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.873 "name": "Existed_Raid", 00:26:25.873 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:25.873 "strip_size_kb": 0, 00:26:25.873 "state": "online", 00:26:25.873 "raid_level": "raid1", 00:26:25.873 "superblock": true, 00:26:25.873 "num_base_bdevs": 3, 00:26:25.873 "num_base_bdevs_discovered": 3, 00:26:25.873 "num_base_bdevs_operational": 3, 00:26:25.873 "base_bdevs_list": [ 00:26:25.873 { 00:26:25.873 "name": "BaseBdev1", 00:26:25.873 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:25.873 "is_configured": true, 00:26:25.873 "data_offset": 2048, 00:26:25.873 "data_size": 63488 00:26:25.873 }, 00:26:25.873 { 00:26:25.873 "name": "BaseBdev2", 00:26:25.873 "uuid": "b132efc4-5f18-40de-914e-3ec4a7aad54f", 00:26:25.873 "is_configured": true, 00:26:25.873 "data_offset": 2048, 00:26:25.873 "data_size": 63488 00:26:25.873 }, 00:26:25.873 { 00:26:25.873 "name": "BaseBdev3", 00:26:25.873 "uuid": "a64a0495-8b08-4406-bb03-4b27d0fe9a46", 00:26:25.873 "is_configured": true, 00:26:25.873 "data_offset": 2048, 00:26:25.873 "data_size": 63488 00:26:25.873 } 00:26:25.873 ] 00:26:25.873 }' 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.873 13:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:26.441 
13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.441 [2024-10-28 13:37:40.409489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:26.441 "name": "Existed_Raid", 00:26:26.441 "aliases": [ 00:26:26.441 "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd" 00:26:26.441 ], 00:26:26.441 "product_name": "Raid Volume", 00:26:26.441 "block_size": 512, 00:26:26.441 "num_blocks": 63488, 00:26:26.441 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:26.441 "assigned_rate_limits": { 00:26:26.441 "rw_ios_per_sec": 0, 00:26:26.441 "rw_mbytes_per_sec": 0, 00:26:26.441 "r_mbytes_per_sec": 0, 00:26:26.441 "w_mbytes_per_sec": 0 00:26:26.441 }, 00:26:26.441 "claimed": false, 00:26:26.441 "zoned": false, 00:26:26.441 "supported_io_types": { 00:26:26.441 "read": true, 00:26:26.441 "write": true, 00:26:26.441 "unmap": false, 00:26:26.441 "flush": false, 00:26:26.441 "reset": true, 00:26:26.441 "nvme_admin": false, 00:26:26.441 "nvme_io": false, 00:26:26.441 "nvme_io_md": false, 00:26:26.441 "write_zeroes": true, 00:26:26.441 "zcopy": false, 00:26:26.441 "get_zone_info": false, 00:26:26.441 "zone_management": false, 00:26:26.441 "zone_append": false, 00:26:26.441 "compare": false, 00:26:26.441 "compare_and_write": false, 00:26:26.441 
"abort": false, 00:26:26.441 "seek_hole": false, 00:26:26.441 "seek_data": false, 00:26:26.441 "copy": false, 00:26:26.441 "nvme_iov_md": false 00:26:26.441 }, 00:26:26.441 "memory_domains": [ 00:26:26.441 { 00:26:26.441 "dma_device_id": "system", 00:26:26.441 "dma_device_type": 1 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.441 "dma_device_type": 2 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "dma_device_id": "system", 00:26:26.441 "dma_device_type": 1 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.441 "dma_device_type": 2 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "dma_device_id": "system", 00:26:26.441 "dma_device_type": 1 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.441 "dma_device_type": 2 00:26:26.441 } 00:26:26.441 ], 00:26:26.441 "driver_specific": { 00:26:26.441 "raid": { 00:26:26.441 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:26.441 "strip_size_kb": 0, 00:26:26.441 "state": "online", 00:26:26.441 "raid_level": "raid1", 00:26:26.441 "superblock": true, 00:26:26.441 "num_base_bdevs": 3, 00:26:26.441 "num_base_bdevs_discovered": 3, 00:26:26.441 "num_base_bdevs_operational": 3, 00:26:26.441 "base_bdevs_list": [ 00:26:26.441 { 00:26:26.441 "name": "BaseBdev1", 00:26:26.441 "uuid": "b79a41e0-dc78-40f5-8e3e-530c90a2033f", 00:26:26.441 "is_configured": true, 00:26:26.441 "data_offset": 2048, 00:26:26.441 "data_size": 63488 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "name": "BaseBdev2", 00:26:26.441 "uuid": "b132efc4-5f18-40de-914e-3ec4a7aad54f", 00:26:26.441 "is_configured": true, 00:26:26.441 "data_offset": 2048, 00:26:26.441 "data_size": 63488 00:26:26.441 }, 00:26:26.441 { 00:26:26.441 "name": "BaseBdev3", 00:26:26.441 "uuid": "a64a0495-8b08-4406-bb03-4b27d0fe9a46", 00:26:26.441 "is_configured": true, 00:26:26.441 "data_offset": 2048, 00:26:26.441 "data_size": 63488 00:26:26.441 } 00:26:26.441 ] 
00:26:26.441 } 00:26:26.441 } 00:26:26.441 }' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:26.441 BaseBdev2 00:26:26.441 BaseBdev3' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.441 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.701 
13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.701 [2024-10-28 13:37:40.725257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.701 13:37:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.701 "name": "Existed_Raid", 00:26:26.701 "uuid": "a04e5cc4-af03-4db7-b9e6-5fb7bd9022cd", 00:26:26.701 "strip_size_kb": 0, 00:26:26.701 "state": "online", 00:26:26.701 "raid_level": "raid1", 00:26:26.701 "superblock": true, 00:26:26.701 "num_base_bdevs": 3, 00:26:26.701 "num_base_bdevs_discovered": 2, 00:26:26.701 "num_base_bdevs_operational": 2, 00:26:26.701 "base_bdevs_list": [ 00:26:26.701 { 00:26:26.701 "name": null, 00:26:26.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.701 "is_configured": false, 00:26:26.701 "data_offset": 0, 00:26:26.701 "data_size": 63488 00:26:26.701 }, 00:26:26.701 { 00:26:26.701 "name": "BaseBdev2", 00:26:26.701 "uuid": "b132efc4-5f18-40de-914e-3ec4a7aad54f", 00:26:26.701 "is_configured": true, 00:26:26.701 "data_offset": 2048, 00:26:26.701 "data_size": 63488 00:26:26.701 }, 00:26:26.701 { 00:26:26.701 "name": "BaseBdev3", 00:26:26.701 "uuid": "a64a0495-8b08-4406-bb03-4b27d0fe9a46", 00:26:26.701 "is_configured": true, 00:26:26.701 "data_offset": 2048, 00:26:26.701 "data_size": 63488 00:26:26.701 } 00:26:26.701 ] 00:26:26.701 }' 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.701 13:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:27.268 13:37:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.268 [2024-10-28 13:37:41.300073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.268 [2024-10-28 13:37:41.370616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:27.268 [2024-10-28 13:37:41.370813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:27.268 [2024-10-28 13:37:41.389586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:27.268 [2024-10-28 13:37:41.389673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:27.268 [2024-10-28 13:37:41.389691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:27.268 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.539 BaseBdev2 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:27.539 13:37:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.539 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 [ 00:26:27.540 { 00:26:27.540 "name": "BaseBdev2", 00:26:27.540 "aliases": [ 00:26:27.540 "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3" 00:26:27.540 ], 00:26:27.540 "product_name": "Malloc disk", 00:26:27.540 "block_size": 512, 00:26:27.540 "num_blocks": 65536, 00:26:27.540 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:27.540 "assigned_rate_limits": { 00:26:27.540 "rw_ios_per_sec": 0, 00:26:27.540 "rw_mbytes_per_sec": 0, 00:26:27.540 "r_mbytes_per_sec": 0, 00:26:27.540 "w_mbytes_per_sec": 0 00:26:27.540 }, 00:26:27.540 "claimed": false, 00:26:27.540 "zoned": false, 00:26:27.540 "supported_io_types": { 00:26:27.540 "read": true, 00:26:27.540 "write": true, 00:26:27.540 "unmap": true, 00:26:27.540 "flush": true, 00:26:27.540 "reset": true, 00:26:27.540 "nvme_admin": false, 00:26:27.540 "nvme_io": false, 00:26:27.540 "nvme_io_md": false, 00:26:27.540 "write_zeroes": true, 00:26:27.540 "zcopy": true, 00:26:27.540 "get_zone_info": false, 00:26:27.540 "zone_management": false, 00:26:27.540 "zone_append": false, 00:26:27.540 "compare": false, 00:26:27.540 
"compare_and_write": false, 00:26:27.540 "abort": true, 00:26:27.540 "seek_hole": false, 00:26:27.540 "seek_data": false, 00:26:27.540 "copy": true, 00:26:27.540 "nvme_iov_md": false 00:26:27.540 }, 00:26:27.540 "memory_domains": [ 00:26:27.540 { 00:26:27.540 "dma_device_id": "system", 00:26:27.540 "dma_device_type": 1 00:26:27.540 }, 00:26:27.540 { 00:26:27.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.540 "dma_device_type": 2 00:26:27.540 } 00:26:27.540 ], 00:26:27.540 "driver_specific": {} 00:26:27.540 } 00:26:27.540 ] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 BaseBdev3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 [ 00:26:27.540 { 00:26:27.540 "name": "BaseBdev3", 00:26:27.540 "aliases": [ 00:26:27.540 "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0" 00:26:27.540 ], 00:26:27.540 "product_name": "Malloc disk", 00:26:27.540 "block_size": 512, 00:26:27.540 "num_blocks": 65536, 00:26:27.540 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:27.540 "assigned_rate_limits": { 00:26:27.540 "rw_ios_per_sec": 0, 00:26:27.540 "rw_mbytes_per_sec": 0, 00:26:27.540 "r_mbytes_per_sec": 0, 00:26:27.540 "w_mbytes_per_sec": 0 00:26:27.540 }, 00:26:27.540 "claimed": false, 00:26:27.540 "zoned": false, 00:26:27.540 "supported_io_types": { 00:26:27.540 "read": true, 00:26:27.540 "write": true, 00:26:27.540 "unmap": true, 00:26:27.540 "flush": true, 00:26:27.540 "reset": true, 00:26:27.540 "nvme_admin": false, 00:26:27.540 "nvme_io": false, 00:26:27.540 "nvme_io_md": false, 00:26:27.540 "write_zeroes": true, 00:26:27.540 "zcopy": true, 00:26:27.540 "get_zone_info": false, 00:26:27.540 "zone_management": false, 00:26:27.540 
"zone_append": false, 00:26:27.540 "compare": false, 00:26:27.540 "compare_and_write": false, 00:26:27.540 "abort": true, 00:26:27.540 "seek_hole": false, 00:26:27.540 "seek_data": false, 00:26:27.540 "copy": true, 00:26:27.540 "nvme_iov_md": false 00:26:27.540 }, 00:26:27.540 "memory_domains": [ 00:26:27.540 { 00:26:27.540 "dma_device_id": "system", 00:26:27.540 "dma_device_type": 1 00:26:27.540 }, 00:26:27.540 { 00:26:27.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.540 "dma_device_type": 2 00:26:27.540 } 00:26:27.540 ], 00:26:27.540 "driver_specific": {} 00:26:27.540 } 00:26:27.540 ] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.540 [2024-10-28 13:37:41.559592] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:27.540 [2024-10-28 13:37:41.559652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:27.540 [2024-10-28 13:37:41.559687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:27.540 [2024-10-28 13:37:41.562474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:27.540 13:37:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.540 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.541 "name": 
"Existed_Raid", 00:26:27.541 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:27.541 "strip_size_kb": 0, 00:26:27.541 "state": "configuring", 00:26:27.541 "raid_level": "raid1", 00:26:27.541 "superblock": true, 00:26:27.541 "num_base_bdevs": 3, 00:26:27.541 "num_base_bdevs_discovered": 2, 00:26:27.541 "num_base_bdevs_operational": 3, 00:26:27.541 "base_bdevs_list": [ 00:26:27.541 { 00:26:27.541 "name": "BaseBdev1", 00:26:27.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.541 "is_configured": false, 00:26:27.541 "data_offset": 0, 00:26:27.541 "data_size": 0 00:26:27.541 }, 00:26:27.541 { 00:26:27.541 "name": "BaseBdev2", 00:26:27.541 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:27.541 "is_configured": true, 00:26:27.541 "data_offset": 2048, 00:26:27.541 "data_size": 63488 00:26:27.541 }, 00:26:27.541 { 00:26:27.541 "name": "BaseBdev3", 00:26:27.541 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:27.541 "is_configured": true, 00:26:27.541 "data_offset": 2048, 00:26:27.541 "data_size": 63488 00:26:27.541 } 00:26:27.541 ] 00:26:27.541 }' 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.541 13:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 [2024-10-28 13:37:42.107759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.109 "name": "Existed_Raid", 00:26:28.109 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:28.109 "strip_size_kb": 0, 00:26:28.109 "state": "configuring", 00:26:28.109 "raid_level": "raid1", 00:26:28.109 "superblock": true, 00:26:28.109 
"num_base_bdevs": 3, 00:26:28.109 "num_base_bdevs_discovered": 1, 00:26:28.109 "num_base_bdevs_operational": 3, 00:26:28.109 "base_bdevs_list": [ 00:26:28.109 { 00:26:28.109 "name": "BaseBdev1", 00:26:28.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.109 "is_configured": false, 00:26:28.109 "data_offset": 0, 00:26:28.109 "data_size": 0 00:26:28.109 }, 00:26:28.109 { 00:26:28.109 "name": null, 00:26:28.109 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:28.109 "is_configured": false, 00:26:28.109 "data_offset": 0, 00:26:28.109 "data_size": 63488 00:26:28.109 }, 00:26:28.109 { 00:26:28.109 "name": "BaseBdev3", 00:26:28.109 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:28.109 "is_configured": true, 00:26:28.109 "data_offset": 2048, 00:26:28.109 "data_size": 63488 00:26:28.109 } 00:26:28.109 ] 00:26:28.109 }' 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.109 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 [2024-10-28 13:37:42.692215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:28.676 BaseBdev1 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.676 [ 00:26:28.676 { 00:26:28.676 "name": "BaseBdev1", 00:26:28.676 "aliases": [ 00:26:28.676 
"c99fcb68-65cb-4aa7-b017-9891d5ae6f3c" 00:26:28.676 ], 00:26:28.676 "product_name": "Malloc disk", 00:26:28.676 "block_size": 512, 00:26:28.676 "num_blocks": 65536, 00:26:28.676 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:28.676 "assigned_rate_limits": { 00:26:28.676 "rw_ios_per_sec": 0, 00:26:28.676 "rw_mbytes_per_sec": 0, 00:26:28.676 "r_mbytes_per_sec": 0, 00:26:28.676 "w_mbytes_per_sec": 0 00:26:28.676 }, 00:26:28.676 "claimed": true, 00:26:28.676 "claim_type": "exclusive_write", 00:26:28.676 "zoned": false, 00:26:28.676 "supported_io_types": { 00:26:28.676 "read": true, 00:26:28.676 "write": true, 00:26:28.676 "unmap": true, 00:26:28.676 "flush": true, 00:26:28.676 "reset": true, 00:26:28.676 "nvme_admin": false, 00:26:28.676 "nvme_io": false, 00:26:28.676 "nvme_io_md": false, 00:26:28.676 "write_zeroes": true, 00:26:28.676 "zcopy": true, 00:26:28.676 "get_zone_info": false, 00:26:28.676 "zone_management": false, 00:26:28.676 "zone_append": false, 00:26:28.676 "compare": false, 00:26:28.676 "compare_and_write": false, 00:26:28.676 "abort": true, 00:26:28.676 "seek_hole": false, 00:26:28.676 "seek_data": false, 00:26:28.676 "copy": true, 00:26:28.676 "nvme_iov_md": false 00:26:28.676 }, 00:26:28.676 "memory_domains": [ 00:26:28.676 { 00:26:28.676 "dma_device_id": "system", 00:26:28.676 "dma_device_type": 1 00:26:28.676 }, 00:26:28.676 { 00:26:28.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.676 "dma_device_type": 2 00:26:28.676 } 00:26:28.676 ], 00:26:28.676 "driver_specific": {} 00:26:28.676 } 00:26:28.676 ] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.676 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:28.677 "name": "Existed_Raid", 00:26:28.677 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:28.677 "strip_size_kb": 0, 00:26:28.677 "state": "configuring", 00:26:28.677 "raid_level": "raid1", 00:26:28.677 "superblock": true, 00:26:28.677 "num_base_bdevs": 3, 00:26:28.677 "num_base_bdevs_discovered": 2, 00:26:28.677 
"num_base_bdevs_operational": 3, 00:26:28.677 "base_bdevs_list": [ 00:26:28.677 { 00:26:28.677 "name": "BaseBdev1", 00:26:28.677 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:28.677 "is_configured": true, 00:26:28.677 "data_offset": 2048, 00:26:28.677 "data_size": 63488 00:26:28.677 }, 00:26:28.677 { 00:26:28.677 "name": null, 00:26:28.677 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:28.677 "is_configured": false, 00:26:28.677 "data_offset": 0, 00:26:28.677 "data_size": 63488 00:26:28.677 }, 00:26:28.677 { 00:26:28.677 "name": "BaseBdev3", 00:26:28.677 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:28.677 "is_configured": true, 00:26:28.677 "data_offset": 2048, 00:26:28.677 "data_size": 63488 00:26:28.677 } 00:26:28.677 ] 00:26:28.677 }' 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:28.677 13:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:29.245 [2024-10-28 13:37:43.296545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.245 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.246 "name": "Existed_Raid", 00:26:29.246 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:29.246 "strip_size_kb": 0, 00:26:29.246 "state": "configuring", 00:26:29.246 "raid_level": "raid1", 00:26:29.246 "superblock": true, 00:26:29.246 "num_base_bdevs": 3, 00:26:29.246 "num_base_bdevs_discovered": 1, 00:26:29.246 "num_base_bdevs_operational": 3, 00:26:29.246 "base_bdevs_list": [ 00:26:29.246 { 00:26:29.246 "name": "BaseBdev1", 00:26:29.246 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:29.246 "is_configured": true, 00:26:29.246 "data_offset": 2048, 00:26:29.246 "data_size": 63488 00:26:29.246 }, 00:26:29.246 { 00:26:29.246 "name": null, 00:26:29.246 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:29.246 "is_configured": false, 00:26:29.246 "data_offset": 0, 00:26:29.246 "data_size": 63488 00:26:29.246 }, 00:26:29.246 { 00:26:29.246 "name": null, 00:26:29.246 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:29.246 "is_configured": false, 00:26:29.246 "data_offset": 0, 00:26:29.246 "data_size": 63488 00:26:29.246 } 00:26:29.246 ] 00:26:29.246 }' 00:26:29.246 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.246 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.814 [2024-10-28 13:37:43.876737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:29.814 13:37:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:29.814 "name": "Existed_Raid", 00:26:29.814 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:29.814 "strip_size_kb": 0, 00:26:29.814 "state": "configuring", 00:26:29.814 "raid_level": "raid1", 00:26:29.814 "superblock": true, 00:26:29.814 "num_base_bdevs": 3, 00:26:29.814 "num_base_bdevs_discovered": 2, 00:26:29.814 "num_base_bdevs_operational": 3, 00:26:29.814 "base_bdevs_list": [ 00:26:29.814 { 00:26:29.814 "name": "BaseBdev1", 00:26:29.814 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:29.814 "is_configured": true, 00:26:29.814 "data_offset": 2048, 00:26:29.814 "data_size": 63488 00:26:29.814 }, 00:26:29.814 { 00:26:29.814 "name": null, 00:26:29.814 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:29.814 "is_configured": false, 00:26:29.814 "data_offset": 0, 00:26:29.814 "data_size": 63488 00:26:29.814 }, 00:26:29.814 { 00:26:29.814 "name": "BaseBdev3", 00:26:29.814 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:29.814 "is_configured": true, 00:26:29.814 "data_offset": 2048, 00:26:29.814 "data_size": 63488 00:26:29.814 } 00:26:29.814 ] 00:26:29.814 }' 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:29.814 13:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.382 
13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.382 [2024-10-28 13:37:44.441228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:30.382 
13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.382 "name": "Existed_Raid", 00:26:30.382 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:30.382 "strip_size_kb": 0, 00:26:30.382 "state": "configuring", 00:26:30.382 "raid_level": "raid1", 00:26:30.382 "superblock": true, 00:26:30.382 "num_base_bdevs": 3, 00:26:30.382 "num_base_bdevs_discovered": 1, 00:26:30.382 "num_base_bdevs_operational": 3, 00:26:30.382 "base_bdevs_list": [ 00:26:30.382 { 00:26:30.382 "name": null, 00:26:30.382 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:30.382 "is_configured": false, 00:26:30.382 "data_offset": 0, 00:26:30.382 "data_size": 63488 00:26:30.382 }, 00:26:30.382 { 00:26:30.382 "name": null, 00:26:30.382 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:30.382 "is_configured": false, 00:26:30.382 "data_offset": 0, 00:26:30.382 "data_size": 63488 00:26:30.382 }, 00:26:30.382 { 00:26:30.382 "name": 
"BaseBdev3", 00:26:30.382 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:30.382 "is_configured": true, 00:26:30.382 "data_offset": 2048, 00:26:30.382 "data_size": 63488 00:26:30.382 } 00:26:30.382 ] 00:26:30.382 }' 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.382 13:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:26:30.948 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:31.207 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.207 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.207 [2024-10-28 13:37:45.111380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:31.207 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.208 "name": "Existed_Raid", 00:26:31.208 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:31.208 "strip_size_kb": 0, 00:26:31.208 "state": "configuring", 00:26:31.208 "raid_level": "raid1", 00:26:31.208 "superblock": true, 00:26:31.208 "num_base_bdevs": 3, 00:26:31.208 "num_base_bdevs_discovered": 2, 00:26:31.208 "num_base_bdevs_operational": 3, 00:26:31.208 
"base_bdevs_list": [ 00:26:31.208 { 00:26:31.208 "name": null, 00:26:31.208 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:31.208 "is_configured": false, 00:26:31.208 "data_offset": 0, 00:26:31.208 "data_size": 63488 00:26:31.208 }, 00:26:31.208 { 00:26:31.208 "name": "BaseBdev2", 00:26:31.208 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:31.208 "is_configured": true, 00:26:31.208 "data_offset": 2048, 00:26:31.208 "data_size": 63488 00:26:31.208 }, 00:26:31.208 { 00:26:31.208 "name": "BaseBdev3", 00:26:31.208 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:31.208 "is_configured": true, 00:26:31.208 "data_offset": 2048, 00:26:31.208 "data_size": 63488 00:26:31.208 } 00:26:31.208 ] 00:26:31.208 }' 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.208 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.466 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.467 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.467 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.467 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c99fcb68-65cb-4aa7-b017-9891d5ae6f3c 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.726 [2024-10-28 13:37:45.732755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:31.726 [2024-10-28 13:37:45.732998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:31.726 [2024-10-28 13:37:45.733022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:31.726 [2024-10-28 13:37:45.733355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:26:31.726 NewBaseBdev 00:26:31.726 [2024-10-28 13:37:45.733531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:31.726 [2024-10-28 13:37:45.733547] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:31.726 [2024-10-28 13:37:45.733680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.726 [ 00:26:31.726 { 00:26:31.726 "name": "NewBaseBdev", 00:26:31.726 "aliases": [ 00:26:31.726 "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c" 00:26:31.726 ], 00:26:31.726 "product_name": "Malloc disk", 00:26:31.726 "block_size": 512, 00:26:31.726 "num_blocks": 65536, 00:26:31.726 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:31.726 "assigned_rate_limits": { 00:26:31.726 "rw_ios_per_sec": 0, 00:26:31.726 "rw_mbytes_per_sec": 0, 00:26:31.726 "r_mbytes_per_sec": 0, 00:26:31.726 "w_mbytes_per_sec": 0 00:26:31.726 }, 00:26:31.726 "claimed": true, 00:26:31.726 "claim_type": "exclusive_write", 00:26:31.726 "zoned": false, 00:26:31.726 "supported_io_types": { 00:26:31.726 "read": true, 00:26:31.726 "write": true, 00:26:31.726 "unmap": true, 00:26:31.726 "flush": true, 00:26:31.726 "reset": true, 00:26:31.726 "nvme_admin": 
false, 00:26:31.726 "nvme_io": false, 00:26:31.726 "nvme_io_md": false, 00:26:31.726 "write_zeroes": true, 00:26:31.726 "zcopy": true, 00:26:31.726 "get_zone_info": false, 00:26:31.726 "zone_management": false, 00:26:31.726 "zone_append": false, 00:26:31.726 "compare": false, 00:26:31.726 "compare_and_write": false, 00:26:31.726 "abort": true, 00:26:31.726 "seek_hole": false, 00:26:31.726 "seek_data": false, 00:26:31.726 "copy": true, 00:26:31.726 "nvme_iov_md": false 00:26:31.726 }, 00:26:31.726 "memory_domains": [ 00:26:31.726 { 00:26:31.726 "dma_device_id": "system", 00:26:31.726 "dma_device_type": 1 00:26:31.726 }, 00:26:31.726 { 00:26:31.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:31.726 "dma_device_type": 2 00:26:31.726 } 00:26:31.726 ], 00:26:31.726 "driver_specific": {} 00:26:31.726 } 00:26:31.726 ] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:31.726 "name": "Existed_Raid", 00:26:31.726 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:31.726 "strip_size_kb": 0, 00:26:31.726 "state": "online", 00:26:31.726 "raid_level": "raid1", 00:26:31.726 "superblock": true, 00:26:31.726 "num_base_bdevs": 3, 00:26:31.726 "num_base_bdevs_discovered": 3, 00:26:31.726 "num_base_bdevs_operational": 3, 00:26:31.726 "base_bdevs_list": [ 00:26:31.726 { 00:26:31.726 "name": "NewBaseBdev", 00:26:31.726 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:31.726 "is_configured": true, 00:26:31.726 "data_offset": 2048, 00:26:31.726 "data_size": 63488 00:26:31.726 }, 00:26:31.726 { 00:26:31.726 "name": "BaseBdev2", 00:26:31.726 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:31.726 "is_configured": true, 00:26:31.726 "data_offset": 2048, 00:26:31.726 "data_size": 63488 00:26:31.726 }, 00:26:31.726 { 00:26:31.726 "name": "BaseBdev3", 00:26:31.726 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:31.726 "is_configured": true, 00:26:31.726 "data_offset": 2048, 00:26:31.726 "data_size": 63488 00:26:31.726 } 
00:26:31.726 ] 00:26:31.726 }' 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:31.726 13:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:32.293 [2024-10-28 13:37:46.285376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.293 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.293 "name": "Existed_Raid", 00:26:32.293 "aliases": [ 00:26:32.293 "cd60cf79-02d6-4d04-ab20-bf18a9297b94" 00:26:32.293 ], 00:26:32.293 "product_name": "Raid Volume", 00:26:32.293 "block_size": 512, 00:26:32.293 "num_blocks": 63488, 00:26:32.293 "uuid": 
"cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:32.293 "assigned_rate_limits": { 00:26:32.293 "rw_ios_per_sec": 0, 00:26:32.293 "rw_mbytes_per_sec": 0, 00:26:32.293 "r_mbytes_per_sec": 0, 00:26:32.293 "w_mbytes_per_sec": 0 00:26:32.293 }, 00:26:32.293 "claimed": false, 00:26:32.293 "zoned": false, 00:26:32.293 "supported_io_types": { 00:26:32.293 "read": true, 00:26:32.293 "write": true, 00:26:32.293 "unmap": false, 00:26:32.293 "flush": false, 00:26:32.293 "reset": true, 00:26:32.293 "nvme_admin": false, 00:26:32.293 "nvme_io": false, 00:26:32.293 "nvme_io_md": false, 00:26:32.293 "write_zeroes": true, 00:26:32.293 "zcopy": false, 00:26:32.293 "get_zone_info": false, 00:26:32.293 "zone_management": false, 00:26:32.293 "zone_append": false, 00:26:32.293 "compare": false, 00:26:32.293 "compare_and_write": false, 00:26:32.293 "abort": false, 00:26:32.293 "seek_hole": false, 00:26:32.293 "seek_data": false, 00:26:32.293 "copy": false, 00:26:32.293 "nvme_iov_md": false 00:26:32.293 }, 00:26:32.293 "memory_domains": [ 00:26:32.293 { 00:26:32.293 "dma_device_id": "system", 00:26:32.293 "dma_device_type": 1 00:26:32.293 }, 00:26:32.293 { 00:26:32.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.293 "dma_device_type": 2 00:26:32.293 }, 00:26:32.293 { 00:26:32.293 "dma_device_id": "system", 00:26:32.293 "dma_device_type": 1 00:26:32.293 }, 00:26:32.293 { 00:26:32.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.293 "dma_device_type": 2 00:26:32.293 }, 00:26:32.293 { 00:26:32.293 "dma_device_id": "system", 00:26:32.293 "dma_device_type": 1 00:26:32.293 }, 00:26:32.293 { 00:26:32.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.293 "dma_device_type": 2 00:26:32.293 } 00:26:32.293 ], 00:26:32.294 "driver_specific": { 00:26:32.294 "raid": { 00:26:32.294 "uuid": "cd60cf79-02d6-4d04-ab20-bf18a9297b94", 00:26:32.294 "strip_size_kb": 0, 00:26:32.294 "state": "online", 00:26:32.294 "raid_level": "raid1", 00:26:32.294 "superblock": true, 00:26:32.294 "num_base_bdevs": 
3, 00:26:32.294 "num_base_bdevs_discovered": 3, 00:26:32.294 "num_base_bdevs_operational": 3, 00:26:32.294 "base_bdevs_list": [ 00:26:32.294 { 00:26:32.294 "name": "NewBaseBdev", 00:26:32.294 "uuid": "c99fcb68-65cb-4aa7-b017-9891d5ae6f3c", 00:26:32.294 "is_configured": true, 00:26:32.294 "data_offset": 2048, 00:26:32.294 "data_size": 63488 00:26:32.294 }, 00:26:32.294 { 00:26:32.294 "name": "BaseBdev2", 00:26:32.294 "uuid": "b1e33f5d-60e2-4cc7-9c6b-5a19985c14d3", 00:26:32.294 "is_configured": true, 00:26:32.294 "data_offset": 2048, 00:26:32.294 "data_size": 63488 00:26:32.294 }, 00:26:32.294 { 00:26:32.294 "name": "BaseBdev3", 00:26:32.294 "uuid": "8ec9dad8-13c0-41a9-a9f6-bc0894bddec0", 00:26:32.294 "is_configured": true, 00:26:32.294 "data_offset": 2048, 00:26:32.294 "data_size": 63488 00:26:32.294 } 00:26:32.294 ] 00:26:32.294 } 00:26:32.294 } 00:26:32.294 }' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:32.294 BaseBdev2 00:26:32.294 BaseBdev3' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:32.294 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:32.552 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.553 [2024-10-28 13:37:46.605080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:32.553 [2024-10-28 13:37:46.605119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.553 [2024-10-28 13:37:46.605236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.553 [2024-10-28 13:37:46.605559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.553 [2024-10-28 13:37:46.605583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80818 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80818 ']' 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80818 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:32.553 13:37:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80818 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80818' 00:26:32.553 killing process with pid 80818 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80818 00:26:32.553 [2024-10-28 13:37:46.645270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:32.553 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80818 00:26:32.553 [2024-10-28 13:37:46.677982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:32.811 13:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:32.811 00:26:32.811 real 0m10.464s 00:26:32.811 user 0m18.293s 00:26:32.811 sys 0m1.696s 00:26:32.811 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.811 ************************************ 00:26:32.811 END TEST raid_state_function_test_sb 00:26:32.811 ************************************ 00:26:32.811 13:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.811 13:37:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:26:32.811 13:37:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:32.811 13:37:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.811 13:37:46 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:26:32.811 ************************************ 00:26:32.811 START TEST raid_superblock_test 00:26:32.811 ************************************ 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81442 00:26:32.811 13:37:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81442 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81442 ']' 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:32.811 13:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:33.069 [2024-10-28 13:37:47.049952] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:33.069 [2024-10-28 13:37:47.050123] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81442 ] 00:26:33.069 [2024-10-28 13:37:47.193958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:33.327 [2024-10-28 13:37:47.229173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.327 [2024-10-28 13:37:47.287232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.327 [2024-10-28 13:37:47.349364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:33.327 [2024-10-28 13:37:47.349423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 malloc1 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 [2024-10-28 13:37:48.095552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:34.294 [2024-10-28 13:37:48.095639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.294 [2024-10-28 13:37:48.095678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:34.294 [2024-10-28 13:37:48.095697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.294 [2024-10-28 13:37:48.098706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.294 [2024-10-28 13:37:48.098769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:34.294 pt1 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 malloc2 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 [2024-10-28 13:37:48.124173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:34.294 [2024-10-28 13:37:48.124248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.294 [2024-10-28 13:37:48.124277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:34.294 [2024-10-28 13:37:48.124291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.294 [2024-10-28 13:37:48.127283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.294 [2024-10-28 13:37:48.127344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:34.294 pt2 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 malloc3 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 [2024-10-28 13:37:48.152825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:34.294 [2024-10-28 13:37:48.152926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.294 [2024-10-28 13:37:48.152958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:34.294 [2024-10-28 13:37:48.152984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:26:34.294 [2024-10-28 13:37:48.156048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.294 [2024-10-28 13:37:48.156108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:34.294 pt3 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.294 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.294 [2024-10-28 13:37:48.160932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:34.294 [2024-10-28 13:37:48.163600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:34.294 [2024-10-28 13:37:48.163712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:34.294 [2024-10-28 13:37:48.163915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:26:34.294 [2024-10-28 13:37:48.163948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:34.294 [2024-10-28 13:37:48.164316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:34.294 [2024-10-28 13:37:48.164553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:26:34.295 [2024-10-28 13:37:48.164580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:26:34.295 [2024-10-28 13:37:48.164768] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:34.295 "name": "raid_bdev1", 00:26:34.295 "uuid": 
"3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:34.295 "strip_size_kb": 0, 00:26:34.295 "state": "online", 00:26:34.295 "raid_level": "raid1", 00:26:34.295 "superblock": true, 00:26:34.295 "num_base_bdevs": 3, 00:26:34.295 "num_base_bdevs_discovered": 3, 00:26:34.295 "num_base_bdevs_operational": 3, 00:26:34.295 "base_bdevs_list": [ 00:26:34.295 { 00:26:34.295 "name": "pt1", 00:26:34.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:34.295 "is_configured": true, 00:26:34.295 "data_offset": 2048, 00:26:34.295 "data_size": 63488 00:26:34.295 }, 00:26:34.295 { 00:26:34.295 "name": "pt2", 00:26:34.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.295 "is_configured": true, 00:26:34.295 "data_offset": 2048, 00:26:34.295 "data_size": 63488 00:26:34.295 }, 00:26:34.295 { 00:26:34.295 "name": "pt3", 00:26:34.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.295 "is_configured": true, 00:26:34.295 "data_offset": 2048, 00:26:34.295 "data_size": 63488 00:26:34.295 } 00:26:34.295 ] 00:26:34.295 }' 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:34.295 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:34.553 13:37:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.553 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.553 [2024-10-28 13:37:48.701477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.811 "name": "raid_bdev1", 00:26:34.811 "aliases": [ 00:26:34.811 "3c248699-0419-48e2-8ab9-d3761efd8a15" 00:26:34.811 ], 00:26:34.811 "product_name": "Raid Volume", 00:26:34.811 "block_size": 512, 00:26:34.811 "num_blocks": 63488, 00:26:34.811 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:34.811 "assigned_rate_limits": { 00:26:34.811 "rw_ios_per_sec": 0, 00:26:34.811 "rw_mbytes_per_sec": 0, 00:26:34.811 "r_mbytes_per_sec": 0, 00:26:34.811 "w_mbytes_per_sec": 0 00:26:34.811 }, 00:26:34.811 "claimed": false, 00:26:34.811 "zoned": false, 00:26:34.811 "supported_io_types": { 00:26:34.811 "read": true, 00:26:34.811 "write": true, 00:26:34.811 "unmap": false, 00:26:34.811 "flush": false, 00:26:34.811 "reset": true, 00:26:34.811 "nvme_admin": false, 00:26:34.811 "nvme_io": false, 00:26:34.811 "nvme_io_md": false, 00:26:34.811 "write_zeroes": true, 00:26:34.811 "zcopy": false, 00:26:34.811 "get_zone_info": false, 00:26:34.811 "zone_management": false, 00:26:34.811 "zone_append": false, 00:26:34.811 "compare": false, 00:26:34.811 "compare_and_write": false, 00:26:34.811 "abort": false, 00:26:34.811 "seek_hole": false, 00:26:34.811 "seek_data": false, 00:26:34.811 "copy": false, 00:26:34.811 "nvme_iov_md": false 00:26:34.811 }, 00:26:34.811 "memory_domains": [ 00:26:34.811 { 00:26:34.811 "dma_device_id": "system", 00:26:34.811 
"dma_device_type": 1 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.811 "dma_device_type": 2 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "dma_device_id": "system", 00:26:34.811 "dma_device_type": 1 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.811 "dma_device_type": 2 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "dma_device_id": "system", 00:26:34.811 "dma_device_type": 1 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:34.811 "dma_device_type": 2 00:26:34.811 } 00:26:34.811 ], 00:26:34.811 "driver_specific": { 00:26:34.811 "raid": { 00:26:34.811 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:34.811 "strip_size_kb": 0, 00:26:34.811 "state": "online", 00:26:34.811 "raid_level": "raid1", 00:26:34.811 "superblock": true, 00:26:34.811 "num_base_bdevs": 3, 00:26:34.811 "num_base_bdevs_discovered": 3, 00:26:34.811 "num_base_bdevs_operational": 3, 00:26:34.811 "base_bdevs_list": [ 00:26:34.811 { 00:26:34.811 "name": "pt1", 00:26:34.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:34.811 "is_configured": true, 00:26:34.811 "data_offset": 2048, 00:26:34.811 "data_size": 63488 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "name": "pt2", 00:26:34.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.811 "is_configured": true, 00:26:34.811 "data_offset": 2048, 00:26:34.811 "data_size": 63488 00:26:34.811 }, 00:26:34.811 { 00:26:34.811 "name": "pt3", 00:26:34.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.811 "is_configured": true, 00:26:34.811 "data_offset": 2048, 00:26:34.811 "data_size": 63488 00:26:34.811 } 00:26:34.811 ] 00:26:34.811 } 00:26:34.811 } 00:26:34.811 }' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:34.811 pt2 00:26:34.811 pt3' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:34.811 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.812 13:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.070 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:35.070 13:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:35.070 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:35.070 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.070 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.070 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:35.071 [2024-10-28 13:37:49.009573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c248699-0419-48e2-8ab9-d3761efd8a15 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c248699-0419-48e2-8ab9-d3761efd8a15 ']' 00:26:35.071 13:37:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 [2024-10-28 13:37:49.053172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.071 [2024-10-28 13:37:49.053216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:35.071 [2024-10-28 13:37:49.053327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:35.071 [2024-10-28 13:37:49.053460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:35.071 [2024-10-28 13:37:49.053496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.071 13:37:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.071 [2024-10-28 13:37:49.201293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:35.071 [2024-10-28 13:37:49.203970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:35.071 [2024-10-28 13:37:49.204079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:35.071 [2024-10-28 13:37:49.204187] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:35.071 [2024-10-28 13:37:49.204269] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:35.071 [2024-10-28 13:37:49.204304] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:35.071 [2024-10-28 13:37:49.204331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:35.071 [2024-10-28 13:37:49.204345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:26:35.071 request: 00:26:35.071 { 00:26:35.071 "name": "raid_bdev1", 00:26:35.071 "raid_level": "raid1", 00:26:35.071 "base_bdevs": [ 00:26:35.071 "malloc1", 00:26:35.071 "malloc2", 00:26:35.071 "malloc3" 00:26:35.071 ], 00:26:35.071 "superblock": false, 00:26:35.071 "method": "bdev_raid_create", 00:26:35.071 "req_id": 1 00:26:35.071 } 00:26:35.071 Got JSON-RPC error response 00:26:35.071 response: 00:26:35.071 { 00:26:35.071 "code": -17, 00:26:35.071 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:35.071 } 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.071 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.330 [2024-10-28 13:37:49.273271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:35.330 [2024-10-28 13:37:49.273378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.330 [2024-10-28 13:37:49.273416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:35.330 [2024-10-28 13:37:49.273432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.330 [2024-10-28 13:37:49.276625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.330 [2024-10-28 13:37:49.276671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:35.330 [2024-10-28 13:37:49.276780] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:35.330 [2024-10-28 13:37:49.276880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:35.330 pt1 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.330 "name": "raid_bdev1", 00:26:35.330 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:35.330 "strip_size_kb": 0, 00:26:35.330 "state": "configuring", 00:26:35.330 "raid_level": "raid1", 00:26:35.330 "superblock": true, 00:26:35.330 "num_base_bdevs": 3, 00:26:35.330 "num_base_bdevs_discovered": 1, 00:26:35.330 "num_base_bdevs_operational": 3, 00:26:35.330 "base_bdevs_list": [ 00:26:35.330 { 00:26:35.330 "name": 
"pt1", 00:26:35.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:35.330 "is_configured": true, 00:26:35.330 "data_offset": 2048, 00:26:35.330 "data_size": 63488 00:26:35.330 }, 00:26:35.330 { 00:26:35.330 "name": null, 00:26:35.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.330 "is_configured": false, 00:26:35.330 "data_offset": 2048, 00:26:35.330 "data_size": 63488 00:26:35.330 }, 00:26:35.330 { 00:26:35.330 "name": null, 00:26:35.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.330 "is_configured": false, 00:26:35.330 "data_offset": 2048, 00:26:35.330 "data_size": 63488 00:26:35.330 } 00:26:35.330 ] 00:26:35.330 }' 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.330 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.898 [2024-10-28 13:37:49.781490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:35.898 [2024-10-28 13:37:49.781599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.898 [2024-10-28 13:37:49.781646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:35.898 [2024-10-28 13:37:49.781664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.898 [2024-10-28 13:37:49.782331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.898 [2024-10-28 13:37:49.782373] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:35.898 [2024-10-28 13:37:49.782497] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:35.898 [2024-10-28 13:37:49.782533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:35.898 pt2 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.898 [2024-10-28 13:37:49.789515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.898 13:37:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.898 "name": "raid_bdev1", 00:26:35.898 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:35.898 "strip_size_kb": 0, 00:26:35.898 "state": "configuring", 00:26:35.898 "raid_level": "raid1", 00:26:35.898 "superblock": true, 00:26:35.898 "num_base_bdevs": 3, 00:26:35.898 "num_base_bdevs_discovered": 1, 00:26:35.898 "num_base_bdevs_operational": 3, 00:26:35.898 "base_bdevs_list": [ 00:26:35.898 { 00:26:35.898 "name": "pt1", 00:26:35.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:35.898 "is_configured": true, 00:26:35.898 "data_offset": 2048, 00:26:35.898 "data_size": 63488 00:26:35.898 }, 00:26:35.898 { 00:26:35.898 "name": null, 00:26:35.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.898 "is_configured": false, 00:26:35.898 "data_offset": 0, 00:26:35.898 "data_size": 63488 00:26:35.898 }, 00:26:35.898 { 00:26:35.898 "name": null, 00:26:35.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.898 "is_configured": false, 00:26:35.898 "data_offset": 2048, 00:26:35.898 "data_size": 63488 00:26:35.898 } 00:26:35.898 ] 00:26:35.898 }' 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.898 13:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:26:36.157 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:36.157 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:36.157 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:36.157 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.157 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.415 [2024-10-28 13:37:50.317650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:36.415 [2024-10-28 13:37:50.317759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.415 [2024-10-28 13:37:50.317792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:36.415 [2024-10-28 13:37:50.317826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.415 [2024-10-28 13:37:50.318456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.415 [2024-10-28 13:37:50.318496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:36.415 [2024-10-28 13:37:50.318611] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:36.415 [2024-10-28 13:37:50.318666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:36.415 pt2 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.415 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.415 [2024-10-28 13:37:50.325544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:36.415 [2024-10-28 13:37:50.325610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.415 [2024-10-28 13:37:50.325635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:36.415 [2024-10-28 13:37:50.325653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.415 [2024-10-28 13:37:50.326161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.415 [2024-10-28 13:37:50.326205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:36.415 [2024-10-28 13:37:50.326293] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:36.415 [2024-10-28 13:37:50.326330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:36.415 [2024-10-28 13:37:50.326472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:36.415 [2024-10-28 13:37:50.326501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:36.415 [2024-10-28 13:37:50.326842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:36.415 [2024-10-28 13:37:50.327025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:36.415 [2024-10-28 13:37:50.327048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:36.416 [2024-10-28 13:37:50.327216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:26:36.416 pt3 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.416 13:37:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.416 "name": "raid_bdev1", 00:26:36.416 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:36.416 "strip_size_kb": 0, 00:26:36.416 "state": "online", 00:26:36.416 "raid_level": "raid1", 00:26:36.416 "superblock": true, 00:26:36.416 "num_base_bdevs": 3, 00:26:36.416 "num_base_bdevs_discovered": 3, 00:26:36.416 "num_base_bdevs_operational": 3, 00:26:36.416 "base_bdevs_list": [ 00:26:36.416 { 00:26:36.416 "name": "pt1", 00:26:36.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:36.416 "is_configured": true, 00:26:36.416 "data_offset": 2048, 00:26:36.416 "data_size": 63488 00:26:36.416 }, 00:26:36.416 { 00:26:36.416 "name": "pt2", 00:26:36.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:36.416 "is_configured": true, 00:26:36.416 "data_offset": 2048, 00:26:36.416 "data_size": 63488 00:26:36.416 }, 00:26:36.416 { 00:26:36.416 "name": "pt3", 00:26:36.416 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:36.416 "is_configured": true, 00:26:36.416 "data_offset": 2048, 00:26:36.416 "data_size": 63488 00:26:36.416 } 00:26:36.416 ] 00:26:36.416 }' 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.416 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.673 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:36.931 [2024-10-28 13:37:50.834123] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.931 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.931 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.931 "name": "raid_bdev1", 00:26:36.931 "aliases": [ 00:26:36.931 "3c248699-0419-48e2-8ab9-d3761efd8a15" 00:26:36.931 ], 00:26:36.931 "product_name": "Raid Volume", 00:26:36.931 "block_size": 512, 00:26:36.931 "num_blocks": 63488, 00:26:36.931 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:36.931 "assigned_rate_limits": { 00:26:36.931 "rw_ios_per_sec": 0, 00:26:36.931 "rw_mbytes_per_sec": 0, 00:26:36.931 "r_mbytes_per_sec": 0, 00:26:36.931 "w_mbytes_per_sec": 0 00:26:36.931 }, 00:26:36.931 "claimed": false, 00:26:36.931 "zoned": false, 00:26:36.931 "supported_io_types": { 00:26:36.931 "read": true, 00:26:36.931 "write": true, 00:26:36.931 "unmap": false, 00:26:36.931 "flush": false, 00:26:36.931 "reset": true, 00:26:36.931 "nvme_admin": false, 00:26:36.931 "nvme_io": false, 00:26:36.931 "nvme_io_md": false, 00:26:36.931 "write_zeroes": true, 00:26:36.931 "zcopy": false, 00:26:36.931 "get_zone_info": false, 00:26:36.931 "zone_management": false, 00:26:36.931 "zone_append": false, 00:26:36.931 "compare": false, 00:26:36.931 "compare_and_write": false, 00:26:36.931 "abort": false, 00:26:36.931 "seek_hole": false, 00:26:36.931 "seek_data": false, 00:26:36.931 "copy": false, 00:26:36.931 
"nvme_iov_md": false 00:26:36.931 }, 00:26:36.931 "memory_domains": [ 00:26:36.931 { 00:26:36.931 "dma_device_id": "system", 00:26:36.931 "dma_device_type": 1 00:26:36.931 }, 00:26:36.931 { 00:26:36.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.931 "dma_device_type": 2 00:26:36.931 }, 00:26:36.931 { 00:26:36.931 "dma_device_id": "system", 00:26:36.931 "dma_device_type": 1 00:26:36.931 }, 00:26:36.931 { 00:26:36.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.931 "dma_device_type": 2 00:26:36.931 }, 00:26:36.931 { 00:26:36.931 "dma_device_id": "system", 00:26:36.931 "dma_device_type": 1 00:26:36.931 }, 00:26:36.931 { 00:26:36.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.931 "dma_device_type": 2 00:26:36.931 } 00:26:36.931 ], 00:26:36.931 "driver_specific": { 00:26:36.931 "raid": { 00:26:36.931 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:36.931 "strip_size_kb": 0, 00:26:36.931 "state": "online", 00:26:36.931 "raid_level": "raid1", 00:26:36.931 "superblock": true, 00:26:36.931 "num_base_bdevs": 3, 00:26:36.931 "num_base_bdevs_discovered": 3, 00:26:36.931 "num_base_bdevs_operational": 3, 00:26:36.931 "base_bdevs_list": [ 00:26:36.931 { 00:26:36.931 "name": "pt1", 00:26:36.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:36.932 "is_configured": true, 00:26:36.932 "data_offset": 2048, 00:26:36.932 "data_size": 63488 00:26:36.932 }, 00:26:36.932 { 00:26:36.932 "name": "pt2", 00:26:36.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:36.932 "is_configured": true, 00:26:36.932 "data_offset": 2048, 00:26:36.932 "data_size": 63488 00:26:36.932 }, 00:26:36.932 { 00:26:36.932 "name": "pt3", 00:26:36.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:36.932 "is_configured": true, 00:26:36.932 "data_offset": 2048, 00:26:36.932 "data_size": 63488 00:26:36.932 } 00:26:36.932 ] 00:26:36.932 } 00:26:36.932 } 00:26:36.932 }' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:36.932 pt2 00:26:36.932 pt3' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.932 13:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.932 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.932 13:37:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.190 [2024-10-28 13:37:51.150241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c248699-0419-48e2-8ab9-d3761efd8a15 '!=' 
3c248699-0419-48e2-8ab9-d3761efd8a15 ']' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.190 [2024-10-28 13:37:51.189899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.190 13:37:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.190 "name": "raid_bdev1", 00:26:37.190 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:37.190 "strip_size_kb": 0, 00:26:37.190 "state": "online", 00:26:37.190 "raid_level": "raid1", 00:26:37.190 "superblock": true, 00:26:37.190 "num_base_bdevs": 3, 00:26:37.190 "num_base_bdevs_discovered": 2, 00:26:37.190 "num_base_bdevs_operational": 2, 00:26:37.190 "base_bdevs_list": [ 00:26:37.190 { 00:26:37.190 "name": null, 00:26:37.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.190 "is_configured": false, 00:26:37.190 "data_offset": 0, 00:26:37.190 "data_size": 63488 00:26:37.190 }, 00:26:37.190 { 00:26:37.190 "name": "pt2", 00:26:37.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.190 "is_configured": true, 00:26:37.190 "data_offset": 2048, 00:26:37.190 "data_size": 63488 00:26:37.190 }, 00:26:37.190 { 00:26:37.190 "name": "pt3", 00:26:37.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.190 "is_configured": true, 00:26:37.190 "data_offset": 2048, 00:26:37.190 "data_size": 63488 00:26:37.190 } 00:26:37.190 ] 00:26:37.190 }' 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.190 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 [2024-10-28 13:37:51.713938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:37.758 [2024-10-28 13:37:51.713979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.758 [2024-10-28 13:37:51.714124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:37.758 [2024-10-28 13:37:51.714240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:37.758 [2024-10-28 13:37:51.714265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.758 [2024-10-28 13:37:51.789969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:37.758 [2024-10-28 13:37:51.790065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:37.758 
[2024-10-28 13:37:51.790097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:37.758 [2024-10-28 13:37:51.790117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:37.758 [2024-10-28 13:37:51.793371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:37.758 [2024-10-28 13:37:51.793438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:37.758 [2024-10-28 13:37:51.793563] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:37.758 [2024-10-28 13:37:51.793633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:37.758 pt2 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.758 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.759 13:37:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.759 "name": "raid_bdev1", 00:26:37.759 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:37.759 "strip_size_kb": 0, 00:26:37.759 "state": "configuring", 00:26:37.759 "raid_level": "raid1", 00:26:37.759 "superblock": true, 00:26:37.759 "num_base_bdevs": 3, 00:26:37.759 "num_base_bdevs_discovered": 1, 00:26:37.759 "num_base_bdevs_operational": 2, 00:26:37.759 "base_bdevs_list": [ 00:26:37.759 { 00:26:37.759 "name": null, 00:26:37.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.759 "is_configured": false, 00:26:37.759 "data_offset": 2048, 00:26:37.759 "data_size": 63488 00:26:37.759 }, 00:26:37.759 { 00:26:37.759 "name": "pt2", 00:26:37.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.759 "is_configured": true, 00:26:37.759 "data_offset": 2048, 00:26:37.759 "data_size": 63488 00:26:37.759 }, 00:26:37.759 { 00:26:37.759 "name": null, 00:26:37.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.759 "is_configured": false, 00:26:37.759 "data_offset": 2048, 00:26:37.759 "data_size": 63488 00:26:37.759 } 00:26:37.759 ] 00:26:37.759 }' 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.759 13:37:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.324 [2024-10-28 13:37:52.306131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:38.324 [2024-10-28 13:37:52.306259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.324 [2024-10-28 13:37:52.306308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:38.324 [2024-10-28 13:37:52.306332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.324 [2024-10-28 13:37:52.306924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.324 [2024-10-28 13:37:52.306963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:38.324 [2024-10-28 13:37:52.307079] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:38.324 [2024-10-28 13:37:52.307126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:38.324 [2024-10-28 13:37:52.307289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:38.324 [2024-10-28 13:37:52.307312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:38.324 [2024-10-28 13:37:52.307651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:38.324 [2024-10-28 13:37:52.307824] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:38.324 [2024-10-28 13:37:52.307839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:38.324 [2024-10-28 13:37:52.307995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:38.324 pt3 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.324 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.324 "name": "raid_bdev1", 00:26:38.324 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:38.324 "strip_size_kb": 0, 00:26:38.324 "state": "online", 00:26:38.324 "raid_level": "raid1", 00:26:38.324 "superblock": true, 00:26:38.324 "num_base_bdevs": 3, 00:26:38.324 "num_base_bdevs_discovered": 2, 00:26:38.324 "num_base_bdevs_operational": 2, 00:26:38.324 "base_bdevs_list": [ 00:26:38.324 { 00:26:38.324 "name": null, 00:26:38.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.324 "is_configured": false, 00:26:38.324 "data_offset": 2048, 00:26:38.324 "data_size": 63488 00:26:38.324 }, 00:26:38.324 { 00:26:38.324 "name": "pt2", 00:26:38.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:38.324 "is_configured": true, 00:26:38.324 "data_offset": 2048, 00:26:38.324 "data_size": 63488 00:26:38.324 }, 00:26:38.324 { 00:26:38.324 "name": "pt3", 00:26:38.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:38.324 "is_configured": true, 00:26:38.324 "data_offset": 2048, 00:26:38.324 "data_size": 63488 00:26:38.324 } 00:26:38.324 ] 00:26:38.324 }' 00:26:38.325 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.325 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 [2024-10-28 13:37:52.786241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:38.935 [2024-10-28 
13:37:52.786288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:38.935 [2024-10-28 13:37:52.786414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:38.935 [2024-10-28 13:37:52.786513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:38.935 [2024-10-28 13:37:52.786531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.935 13:37:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 [2024-10-28 13:37:52.850212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:38.935 [2024-10-28 13:37:52.850311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.935 [2024-10-28 13:37:52.850349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:38.935 [2024-10-28 13:37:52.850372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.935 [2024-10-28 13:37:52.853639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.935 [2024-10-28 13:37:52.853689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:38.935 [2024-10-28 13:37:52.853818] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:38.935 [2024-10-28 13:37:52.853877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:38.935 [2024-10-28 13:37:52.854033] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:38.935 [2024-10-28 13:37:52.854052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:38.935 [2024-10-28 13:37:52.854076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:26:38.935 [2024-10-28 13:37:52.854128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:38.935 pt1 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.935 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:38.935 "name": "raid_bdev1", 00:26:38.935 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:38.935 "strip_size_kb": 
0, 00:26:38.935 "state": "configuring", 00:26:38.935 "raid_level": "raid1", 00:26:38.935 "superblock": true, 00:26:38.935 "num_base_bdevs": 3, 00:26:38.935 "num_base_bdevs_discovered": 1, 00:26:38.935 "num_base_bdevs_operational": 2, 00:26:38.935 "base_bdevs_list": [ 00:26:38.935 { 00:26:38.935 "name": null, 00:26:38.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:38.935 "is_configured": false, 00:26:38.935 "data_offset": 2048, 00:26:38.935 "data_size": 63488 00:26:38.935 }, 00:26:38.935 { 00:26:38.936 "name": "pt2", 00:26:38.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:38.936 "is_configured": true, 00:26:38.936 "data_offset": 2048, 00:26:38.936 "data_size": 63488 00:26:38.936 }, 00:26:38.936 { 00:26:38.936 "name": null, 00:26:38.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:38.936 "is_configured": false, 00:26:38.936 "data_offset": 2048, 00:26:38.936 "data_size": 63488 00:26:38.936 } 00:26:38.936 ] 00:26:38.936 }' 00:26:38.936 13:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:38.936 13:37:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.503 [2024-10-28 13:37:53.434578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:39.503 [2024-10-28 13:37:53.434673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.503 [2024-10-28 13:37:53.434715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:39.503 [2024-10-28 13:37:53.434733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.503 [2024-10-28 13:37:53.435387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.503 [2024-10-28 13:37:53.435432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:39.503 [2024-10-28 13:37:53.435582] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:39.503 [2024-10-28 13:37:53.435650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.503 [2024-10-28 13:37:53.435794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:39.503 [2024-10-28 13:37:53.435810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:39.503 [2024-10-28 13:37:53.436129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:26:39.503 [2024-10-28 13:37:53.436339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:39.503 [2024-10-28 13:37:53.436359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:39.503 [2024-10-28 13:37:53.436502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.503 pt3 00:26:39.503 13:37:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.503 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:39.503 "name": "raid_bdev1", 00:26:39.503 "uuid": "3c248699-0419-48e2-8ab9-d3761efd8a15", 00:26:39.503 "strip_size_kb": 0, 00:26:39.503 "state": "online", 
00:26:39.503 "raid_level": "raid1", 00:26:39.503 "superblock": true, 00:26:39.503 "num_base_bdevs": 3, 00:26:39.503 "num_base_bdevs_discovered": 2, 00:26:39.503 "num_base_bdevs_operational": 2, 00:26:39.503 "base_bdevs_list": [ 00:26:39.503 { 00:26:39.503 "name": null, 00:26:39.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.503 "is_configured": false, 00:26:39.503 "data_offset": 2048, 00:26:39.503 "data_size": 63488 00:26:39.504 }, 00:26:39.504 { 00:26:39.504 "name": "pt2", 00:26:39.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:39.504 "is_configured": true, 00:26:39.504 "data_offset": 2048, 00:26:39.504 "data_size": 63488 00:26:39.504 }, 00:26:39.504 { 00:26:39.504 "name": "pt3", 00:26:39.504 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:39.504 "is_configured": true, 00:26:39.504 "data_offset": 2048, 00:26:39.504 "data_size": 63488 00:26:39.504 } 00:26:39.504 ] 00:26:39.504 }' 00:26:39.504 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:39.504 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.071 13:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:40.071 [2024-10-28 13:37:53.995057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3c248699-0419-48e2-8ab9-d3761efd8a15 '!=' 3c248699-0419-48e2-8ab9-d3761efd8a15 ']' 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81442 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81442 ']' 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81442 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81442 00:26:40.071 killing process with pid 81442 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81442' 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81442 00:26:40.071 [2024-10-28 13:37:54.085501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:40.071 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81442 00:26:40.071 [2024-10-28 
13:37:54.085670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:40.071 [2024-10-28 13:37:54.085765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:40.071 [2024-10-28 13:37:54.085786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:26:40.071 [2024-10-28 13:37:54.143639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:40.330 13:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:40.330 00:26:40.330 real 0m7.489s 00:26:40.330 user 0m12.906s 00:26:40.330 sys 0m1.192s 00:26:40.330 ************************************ 00:26:40.330 END TEST raid_superblock_test 00:26:40.330 ************************************ 00:26:40.330 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.330 13:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.588 13:37:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:26:40.588 13:37:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:40.588 13:37:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.588 13:37:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:40.588 ************************************ 00:26:40.588 START TEST raid_read_error_test 00:26:40.588 ************************************ 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 
00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:40.588 13:37:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QDgXr46Ftf 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81882 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81882 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81882 ']' 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.588 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.589 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.589 13:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.589 [2024-10-28 13:37:54.649532] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:26:40.589 [2024-10-28 13:37:54.649787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81882 ] 00:26:40.847 [2024-10-28 13:37:54.815346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:40.847 [2024-10-28 13:37:54.843813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.847 [2024-10-28 13:37:54.912681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.847 [2024-10-28 13:37:54.989681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.847 [2024-10-28 13:37:54.989757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 BaseBdev1_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 true 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 [2024-10-28 13:37:55.625552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:41.784 [2024-10-28 13:37:55.625636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.784 [2024-10-28 13:37:55.625684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:41.784 [2024-10-28 13:37:55.625710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.784 [2024-10-28 13:37:55.628877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.784 [2024-10-28 13:37:55.628930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:41.784 BaseBdev1 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 BaseBdev2_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 true 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 [2024-10-28 13:37:55.661278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:41.784 [2024-10-28 13:37:55.661364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.784 [2024-10-28 13:37:55.661395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:41.784 [2024-10-28 13:37:55.661414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.784 [2024-10-28 13:37:55.664606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.784 [2024-10-28 13:37:55.664661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:41.784 BaseBdev2 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 BaseBdev3_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 true 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 [2024-10-28 13:37:55.697125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:41.784 [2024-10-28 13:37:55.697222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.784 [2024-10-28 13:37:55.697253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:41.784 [2024-10-28 13:37:55.697273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.784 [2024-10-28 13:37:55.700432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.784 [2024-10-28 13:37:55.700492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:41.784 BaseBdev3 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.784 [2024-10-28 13:37:55.705354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:41.784 [2024-10-28 13:37:55.708125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:41.784 [2024-10-28 13:37:55.708280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:41.784 [2024-10-28 13:37:55.708568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:41.784 [2024-10-28 13:37:55.708595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:41.784 [2024-10-28 13:37:55.708987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:41.784 [2024-10-28 13:37:55.709247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:41.784 [2024-10-28 13:37:55.709273] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:41.784 [2024-10-28 13:37:55.709530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:41.784 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:41.785 13:37:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.785 "name": "raid_bdev1", 00:26:41.785 "uuid": "bd4d2693-2cee-499c-8b65-ad024a379cf4", 00:26:41.785 "strip_size_kb": 0, 00:26:41.785 "state": "online", 00:26:41.785 "raid_level": "raid1", 00:26:41.785 "superblock": true, 00:26:41.785 "num_base_bdevs": 3, 00:26:41.785 "num_base_bdevs_discovered": 3, 00:26:41.785 "num_base_bdevs_operational": 3, 00:26:41.785 "base_bdevs_list": [ 00:26:41.785 { 00:26:41.785 "name": "BaseBdev1", 00:26:41.785 "uuid": "47199fe2-ec56-52e4-a720-07ba967b320c", 00:26:41.785 "is_configured": true, 00:26:41.785 "data_offset": 2048, 00:26:41.785 "data_size": 63488 00:26:41.785 }, 00:26:41.785 
{ 00:26:41.785 "name": "BaseBdev2", 00:26:41.785 "uuid": "dca6b281-ec28-533b-b1dd-39154e54ae7d", 00:26:41.785 "is_configured": true, 00:26:41.785 "data_offset": 2048, 00:26:41.785 "data_size": 63488 00:26:41.785 }, 00:26:41.785 { 00:26:41.785 "name": "BaseBdev3", 00:26:41.785 "uuid": "f8c75e86-b596-5d09-9580-ef022441dc74", 00:26:41.785 "is_configured": true, 00:26:41.785 "data_offset": 2048, 00:26:41.785 "data_size": 63488 00:26:41.785 } 00:26:41.785 ] 00:26:41.785 }' 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.785 13:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.352 13:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:42.352 13:37:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:42.352 [2024-10-28 13:37:56.350396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:43.288 "name": "raid_bdev1", 00:26:43.288 "uuid": "bd4d2693-2cee-499c-8b65-ad024a379cf4", 00:26:43.288 "strip_size_kb": 0, 00:26:43.288 "state": "online", 00:26:43.288 "raid_level": "raid1", 00:26:43.288 "superblock": true, 00:26:43.288 "num_base_bdevs": 3, 00:26:43.288 
"num_base_bdevs_discovered": 3, 00:26:43.288 "num_base_bdevs_operational": 3, 00:26:43.288 "base_bdevs_list": [ 00:26:43.288 { 00:26:43.288 "name": "BaseBdev1", 00:26:43.288 "uuid": "47199fe2-ec56-52e4-a720-07ba967b320c", 00:26:43.288 "is_configured": true, 00:26:43.288 "data_offset": 2048, 00:26:43.288 "data_size": 63488 00:26:43.288 }, 00:26:43.288 { 00:26:43.288 "name": "BaseBdev2", 00:26:43.288 "uuid": "dca6b281-ec28-533b-b1dd-39154e54ae7d", 00:26:43.288 "is_configured": true, 00:26:43.288 "data_offset": 2048, 00:26:43.288 "data_size": 63488 00:26:43.288 }, 00:26:43.288 { 00:26:43.288 "name": "BaseBdev3", 00:26:43.288 "uuid": "f8c75e86-b596-5d09-9580-ef022441dc74", 00:26:43.288 "is_configured": true, 00:26:43.288 "data_offset": 2048, 00:26:43.288 "data_size": 63488 00:26:43.288 } 00:26:43.288 ] 00:26:43.288 }' 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:43.288 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.919 [2024-10-28 13:37:57.759449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:43.919 [2024-10-28 13:37:57.759501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:43.919 [2024-10-28 13:37:57.762723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:43.919 [2024-10-28 13:37:57.762791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.919 [2024-10-28 13:37:57.762936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:43.919 
[2024-10-28 13:37:57.762953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:43.919 { 00:26:43.919 "results": [ 00:26:43.919 { 00:26:43.919 "job": "raid_bdev1", 00:26:43.919 "core_mask": "0x1", 00:26:43.919 "workload": "randrw", 00:26:43.919 "percentage": 50, 00:26:43.919 "status": "finished", 00:26:43.919 "queue_depth": 1, 00:26:43.919 "io_size": 131072, 00:26:43.919 "runtime": 1.40631, 00:26:43.919 "iops": 8632.520568011321, 00:26:43.919 "mibps": 1079.0650710014152, 00:26:43.919 "io_failed": 0, 00:26:43.919 "io_timeout": 0, 00:26:43.919 "avg_latency_us": 111.44114782087765, 00:26:43.919 "min_latency_us": 43.75272727272727, 00:26:43.919 "max_latency_us": 1899.0545454545454 00:26:43.919 } 00:26:43.919 ], 00:26:43.919 "core_count": 1 00:26:43.919 } 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81882 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81882 ']' 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81882 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81882 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:43.919 killing process with pid 81882 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81882' 00:26:43.919 
13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81882 00:26:43.919 [2024-10-28 13:37:57.804042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:43.919 13:37:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81882 00:26:43.919 [2024-10-28 13:37:57.837254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QDgXr46Ftf 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:44.178 00:26:44.178 real 0m3.576s 00:26:44.178 user 0m4.661s 00:26:44.178 sys 0m0.631s 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:44.178 ************************************ 00:26:44.178 END TEST raid_read_error_test 00:26:44.178 ************************************ 00:26:44.178 13:37:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.178 13:37:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:26:44.178 13:37:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:44.178 13:37:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:44.178 13:37:58 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.178 ************************************ 00:26:44.178 START TEST raid_write_error_test 00:26:44.178 ************************************ 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:44.178 
13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5Cf3cv7VCv 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82018 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82018 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82018 ']' 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:44.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:44.178 13:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.178 [2024-10-28 13:37:58.268360] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:44.178 [2024-10-28 13:37:58.268595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82018 ] 00:26:44.436 [2024-10-28 13:37:58.425002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:44.436 [2024-10-28 13:37:58.457307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.436 [2024-10-28 13:37:58.510876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.436 [2024-10-28 13:37:58.568272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:44.436 [2024-10-28 13:37:58.568315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 BaseBdev1_malloc 00:26:45.372 13:37:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 true 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 [2024-10-28 13:37:59.299924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:45.372 [2024-10-28 13:37:59.300018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.372 [2024-10-28 13:37:59.300046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:45.372 [2024-10-28 13:37:59.300077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.372 [2024-10-28 13:37:59.302970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.372 [2024-10-28 13:37:59.303018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:45.372 BaseBdev1 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 BaseBdev2_malloc 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.372 true 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.372 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 [2024-10-28 13:37:59.343799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:45.373 [2024-10-28 13:37:59.343885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.373 [2024-10-28 13:37:59.343913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:45.373 [2024-10-28 13:37:59.343931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.373 [2024-10-28 13:37:59.346815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.373 [2024-10-28 13:37:59.346866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:45.373 BaseBdev2 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 BaseBdev3_malloc 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 true 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 [2024-10-28 13:37:59.387903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:45.373 [2024-10-28 13:37:59.387996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.373 [2024-10-28 13:37:59.388026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:45.373 [2024-10-28 13:37:59.388044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.373 [2024-10-28 13:37:59.390960] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.373 [2024-10-28 13:37:59.391010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:45.373 BaseBdev3 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 [2024-10-28 13:37:59.395960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:45.373 [2024-10-28 13:37:59.398504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:45.373 [2024-10-28 13:37:59.398621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:45.373 [2024-10-28 13:37:59.398888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:45.373 [2024-10-28 13:37:59.398921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:45.373 [2024-10-28 13:37:59.399281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:45.373 [2024-10-28 13:37:59.399510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:45.373 [2024-10-28 13:37:59.399552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:45.373 [2024-10-28 13:37:59.399737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:45.373 "name": "raid_bdev1", 00:26:45.373 "uuid": "77caad28-b264-40c1-9828-5ff7dcc94675", 00:26:45.373 "strip_size_kb": 0, 00:26:45.373 "state": "online", 00:26:45.373 "raid_level": "raid1", 00:26:45.373 "superblock": true, 00:26:45.373 
"num_base_bdevs": 3, 00:26:45.373 "num_base_bdevs_discovered": 3, 00:26:45.373 "num_base_bdevs_operational": 3, 00:26:45.373 "base_bdevs_list": [ 00:26:45.373 { 00:26:45.373 "name": "BaseBdev1", 00:26:45.373 "uuid": "0dfb6084-ba57-577d-817b-b6ea2f7744d4", 00:26:45.373 "is_configured": true, 00:26:45.373 "data_offset": 2048, 00:26:45.373 "data_size": 63488 00:26:45.373 }, 00:26:45.373 { 00:26:45.373 "name": "BaseBdev2", 00:26:45.373 "uuid": "0176a524-617a-51ae-b52d-2e1c9a765d42", 00:26:45.373 "is_configured": true, 00:26:45.373 "data_offset": 2048, 00:26:45.373 "data_size": 63488 00:26:45.373 }, 00:26:45.373 { 00:26:45.373 "name": "BaseBdev3", 00:26:45.373 "uuid": "166eaa6a-cbb9-5b28-8e1d-4071c90d746b", 00:26:45.373 "is_configured": true, 00:26:45.373 "data_offset": 2048, 00:26:45.373 "data_size": 63488 00:26:45.373 } 00:26:45.373 ] 00:26:45.373 }' 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:45.373 13:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.938 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:45.938 13:37:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:45.938 [2024-10-28 13:38:00.008985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.874 [2024-10-28 13:38:00.913039] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:46.874 [2024-10-28 13:38:00.913102] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:46.874 [2024-10-28 13:38:00.913428] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.874 13:38:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:46.874 "name": "raid_bdev1", 00:26:46.874 "uuid": "77caad28-b264-40c1-9828-5ff7dcc94675", 00:26:46.874 "strip_size_kb": 0, 00:26:46.874 "state": "online", 00:26:46.874 "raid_level": "raid1", 00:26:46.874 "superblock": true, 00:26:46.874 "num_base_bdevs": 3, 00:26:46.874 "num_base_bdevs_discovered": 2, 00:26:46.874 "num_base_bdevs_operational": 2, 00:26:46.874 "base_bdevs_list": [ 00:26:46.874 { 00:26:46.874 "name": null, 00:26:46.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.874 "is_configured": false, 00:26:46.874 "data_offset": 0, 00:26:46.874 "data_size": 63488 00:26:46.874 }, 00:26:46.874 { 00:26:46.874 "name": "BaseBdev2", 00:26:46.874 "uuid": "0176a524-617a-51ae-b52d-2e1c9a765d42", 00:26:46.874 "is_configured": true, 00:26:46.874 "data_offset": 2048, 00:26:46.874 "data_size": 63488 00:26:46.874 }, 00:26:46.874 { 00:26:46.874 "name": "BaseBdev3", 00:26:46.874 "uuid": "166eaa6a-cbb9-5b28-8e1d-4071c90d746b", 00:26:46.874 "is_configured": true, 00:26:46.874 "data_offset": 2048, 00:26:46.874 "data_size": 63488 00:26:46.874 } 00:26:46.874 ] 00:26:46.874 }' 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:46.874 13:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.442 [2024-10-28 13:38:01.459583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:47.442 [2024-10-28 13:38:01.459636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:47.442 [2024-10-28 13:38:01.463280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:47.442 [2024-10-28 13:38:01.463360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.442 [2024-10-28 13:38:01.463572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:47.442 [2024-10-28 13:38:01.463605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:47.442 { 00:26:47.442 "results": [ 00:26:47.442 { 00:26:47.442 "job": "raid_bdev1", 00:26:47.442 "core_mask": "0x1", 00:26:47.442 "workload": "randrw", 00:26:47.442 "percentage": 50, 00:26:47.442 "status": "finished", 00:26:47.442 "queue_depth": 1, 00:26:47.442 "io_size": 131072, 00:26:47.442 "runtime": 1.447503, 00:26:47.442 "iops": 9546.094204986104, 00:26:47.442 "mibps": 1193.261775623263, 00:26:47.442 "io_failed": 0, 00:26:47.442 "io_timeout": 0, 00:26:47.442 "avg_latency_us": 100.50860669219333, 00:26:47.442 "min_latency_us": 40.49454545454545, 00:26:47.442 "max_latency_us": 1906.5018181818182 00:26:47.442 } 00:26:47.442 ], 00:26:47.442 "core_count": 1 00:26:47.442 } 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82018 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82018 ']' 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # kill -0 82018 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82018 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.442 killing process with pid 82018 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82018' 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82018 00:26:47.442 [2024-10-28 13:38:01.504910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:47.442 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82018 00:26:47.442 [2024-10-28 13:38:01.554109] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:47.728 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5Cf3cv7VCv 00:26:47.728 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:47.728 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:47.728 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:26:47.989 00:26:47.989 real 0m3.743s 00:26:47.989 user 0m4.877s 00:26:47.989 sys 0m0.602s 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.989 13:38:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.989 ************************************ 00:26:47.989 END TEST raid_write_error_test 00:26:47.989 ************************************ 00:26:47.989 13:38:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:26:47.989 13:38:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:47.989 13:38:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:26:47.989 13:38:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:47.989 13:38:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.989 13:38:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:47.989 ************************************ 00:26:47.989 START TEST raid_state_function_test 00:26:47.989 ************************************ 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:47.989 13:38:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:47.989 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82145 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82145' 00:26:47.990 Process raid pid: 82145 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82145 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82145 ']' 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:47.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.990 13:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.990 [2024-10-28 13:38:02.057939] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:26:47.990 [2024-10-28 13:38:02.058125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.248 [2024-10-28 13:38:02.214542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:48.248 [2024-10-28 13:38:02.239794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.248 [2024-10-28 13:38:02.311750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.248 [2024-10-28 13:38:02.393196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:48.248 [2024-10-28 13:38:02.393277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.182 [2024-10-28 13:38:03.059202] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.182 [2024-10-28 13:38:03.059276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.182 [2024-10-28 13:38:03.059297] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:49.182 [2024-10-28 13:38:03.059310] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:49.182 [2024-10-28 13:38:03.059327] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:49.182 [2024-10-28 13:38:03.059339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:49.182 [2024-10-28 13:38:03.059352] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:49.182 [2024-10-28 13:38:03.059364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.182 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.183 
13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.183 "name": "Existed_Raid", 00:26:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.183 "strip_size_kb": 64, 00:26:49.183 "state": "configuring", 00:26:49.183 "raid_level": "raid0", 00:26:49.183 "superblock": false, 00:26:49.183 "num_base_bdevs": 4, 00:26:49.183 "num_base_bdevs_discovered": 0, 00:26:49.183 "num_base_bdevs_operational": 4, 00:26:49.183 "base_bdevs_list": [ 00:26:49.183 { 00:26:49.183 "name": "BaseBdev1", 00:26:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.183 "is_configured": false, 00:26:49.183 "data_offset": 0, 00:26:49.183 "data_size": 0 00:26:49.183 }, 00:26:49.183 { 00:26:49.183 "name": "BaseBdev2", 00:26:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.183 "is_configured": false, 00:26:49.183 "data_offset": 0, 00:26:49.183 "data_size": 0 00:26:49.183 }, 00:26:49.183 { 00:26:49.183 "name": "BaseBdev3", 00:26:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.183 "is_configured": false, 00:26:49.183 "data_offset": 0, 00:26:49.183 "data_size": 0 00:26:49.183 }, 00:26:49.183 { 00:26:49.183 "name": "BaseBdev4", 00:26:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.183 "is_configured": false, 00:26:49.183 "data_offset": 0, 00:26:49.183 "data_size": 0 00:26:49.183 } 00:26:49.183 ] 00:26:49.183 }' 00:26:49.183 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.183 13:38:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.441 [2024-10-28 13:38:03.579284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:49.441 [2024-10-28 13:38:03.579337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.441 [2024-10-28 13:38:03.587285] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:49.441 [2024-10-28 13:38:03.587345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:49.441 [2024-10-28 13:38:03.587364] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:49.441 [2024-10-28 13:38:03.587377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:49.441 [2024-10-28 13:38:03.587390] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:49.441 [2024-10-28 13:38:03.587402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:26:49.441 [2024-10-28 13:38:03.587414] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:49.441 [2024-10-28 13:38:03.587426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.441 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 [2024-10-28 13:38:03.611893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:49.699 BaseBdev1 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 13:38:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 [ 00:26:49.699 { 00:26:49.699 "name": "BaseBdev1", 00:26:49.699 "aliases": [ 00:26:49.699 "5d59b121-21d8-42d8-b970-8e814df97b3a" 00:26:49.699 ], 00:26:49.699 "product_name": "Malloc disk", 00:26:49.699 "block_size": 512, 00:26:49.699 "num_blocks": 65536, 00:26:49.699 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:49.699 "assigned_rate_limits": { 00:26:49.699 "rw_ios_per_sec": 0, 00:26:49.699 "rw_mbytes_per_sec": 0, 00:26:49.699 "r_mbytes_per_sec": 0, 00:26:49.699 "w_mbytes_per_sec": 0 00:26:49.699 }, 00:26:49.699 "claimed": true, 00:26:49.699 "claim_type": "exclusive_write", 00:26:49.699 "zoned": false, 00:26:49.699 "supported_io_types": { 00:26:49.699 "read": true, 00:26:49.699 "write": true, 00:26:49.699 "unmap": true, 00:26:49.699 "flush": true, 00:26:49.699 "reset": true, 00:26:49.699 "nvme_admin": false, 00:26:49.699 "nvme_io": false, 00:26:49.699 "nvme_io_md": false, 00:26:49.699 "write_zeroes": true, 00:26:49.699 "zcopy": true, 00:26:49.699 "get_zone_info": false, 00:26:49.699 "zone_management": false, 00:26:49.699 "zone_append": false, 00:26:49.699 "compare": false, 00:26:49.699 "compare_and_write": false, 00:26:49.699 "abort": true, 00:26:49.699 "seek_hole": false, 00:26:49.699 "seek_data": false, 00:26:49.699 "copy": true, 00:26:49.699 "nvme_iov_md": false 00:26:49.699 }, 00:26:49.699 "memory_domains": [ 00:26:49.699 { 00:26:49.699 "dma_device_id": "system", 00:26:49.699 "dma_device_type": 1 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.699 "dma_device_type": 
2 00:26:49.699 } 00:26:49.699 ], 00:26:49.699 "driver_specific": {} 00:26:49.699 } 00:26:49.699 ] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:49.699 "name": "Existed_Raid", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 "strip_size_kb": 64, 00:26:49.699 "state": "configuring", 00:26:49.699 "raid_level": "raid0", 00:26:49.699 "superblock": false, 00:26:49.699 "num_base_bdevs": 4, 00:26:49.699 "num_base_bdevs_discovered": 1, 00:26:49.699 "num_base_bdevs_operational": 4, 00:26:49.699 "base_bdevs_list": [ 00:26:49.699 { 00:26:49.699 "name": "BaseBdev1", 00:26:49.699 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:49.699 "is_configured": true, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 65536 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "name": "BaseBdev2", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 "is_configured": false, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 0 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "name": "BaseBdev3", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 "is_configured": false, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 0 00:26:49.699 }, 00:26:49.699 { 00:26:49.699 "name": "BaseBdev4", 00:26:49.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:49.699 "is_configured": false, 00:26:49.699 "data_offset": 0, 00:26:49.699 "data_size": 0 00:26:49.699 } 00:26:49.699 ] 00:26:49.699 }' 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:49.699 13:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:26:50.265 [2024-10-28 13:38:04.168168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:50.265 [2024-10-28 13:38:04.168268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.265 [2024-10-28 13:38:04.176277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:50.265 [2024-10-28 13:38:04.179071] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:50.265 [2024-10-28 13:38:04.179137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:50.265 [2024-10-28 13:38:04.179185] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:50.265 [2024-10-28 13:38:04.179201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:50.265 [2024-10-28 13:38:04.179213] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:50.265 [2024-10-28 13:38:04.179225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.265 "name": "Existed_Raid", 00:26:50.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.265 
"strip_size_kb": 64, 00:26:50.265 "state": "configuring", 00:26:50.265 "raid_level": "raid0", 00:26:50.265 "superblock": false, 00:26:50.265 "num_base_bdevs": 4, 00:26:50.265 "num_base_bdevs_discovered": 1, 00:26:50.265 "num_base_bdevs_operational": 4, 00:26:50.265 "base_bdevs_list": [ 00:26:50.265 { 00:26:50.265 "name": "BaseBdev1", 00:26:50.265 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:50.265 "is_configured": true, 00:26:50.265 "data_offset": 0, 00:26:50.265 "data_size": 65536 00:26:50.265 }, 00:26:50.265 { 00:26:50.265 "name": "BaseBdev2", 00:26:50.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.265 "is_configured": false, 00:26:50.265 "data_offset": 0, 00:26:50.265 "data_size": 0 00:26:50.265 }, 00:26:50.265 { 00:26:50.265 "name": "BaseBdev3", 00:26:50.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.265 "is_configured": false, 00:26:50.265 "data_offset": 0, 00:26:50.265 "data_size": 0 00:26:50.265 }, 00:26:50.265 { 00:26:50.265 "name": "BaseBdev4", 00:26:50.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.265 "is_configured": false, 00:26:50.265 "data_offset": 0, 00:26:50.265 "data_size": 0 00:26:50.265 } 00:26:50.265 ] 00:26:50.265 }' 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.265 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.829 [2024-10-28 13:38:04.721725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:50.829 BaseBdev2 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.829 [ 00:26:50.829 { 00:26:50.829 "name": "BaseBdev2", 00:26:50.829 "aliases": [ 00:26:50.829 "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e" 00:26:50.829 ], 00:26:50.829 "product_name": "Malloc disk", 00:26:50.829 "block_size": 512, 00:26:50.829 "num_blocks": 65536, 00:26:50.829 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:50.829 "assigned_rate_limits": { 00:26:50.829 "rw_ios_per_sec": 0, 00:26:50.829 "rw_mbytes_per_sec": 0, 00:26:50.829 "r_mbytes_per_sec": 0, 00:26:50.829 "w_mbytes_per_sec": 0 00:26:50.829 
}, 00:26:50.829 "claimed": true, 00:26:50.829 "claim_type": "exclusive_write", 00:26:50.829 "zoned": false, 00:26:50.829 "supported_io_types": { 00:26:50.829 "read": true, 00:26:50.829 "write": true, 00:26:50.829 "unmap": true, 00:26:50.829 "flush": true, 00:26:50.829 "reset": true, 00:26:50.829 "nvme_admin": false, 00:26:50.829 "nvme_io": false, 00:26:50.829 "nvme_io_md": false, 00:26:50.829 "write_zeroes": true, 00:26:50.829 "zcopy": true, 00:26:50.829 "get_zone_info": false, 00:26:50.829 "zone_management": false, 00:26:50.829 "zone_append": false, 00:26:50.829 "compare": false, 00:26:50.829 "compare_and_write": false, 00:26:50.829 "abort": true, 00:26:50.829 "seek_hole": false, 00:26:50.829 "seek_data": false, 00:26:50.829 "copy": true, 00:26:50.829 "nvme_iov_md": false 00:26:50.829 }, 00:26:50.829 "memory_domains": [ 00:26:50.829 { 00:26:50.829 "dma_device_id": "system", 00:26:50.829 "dma_device_type": 1 00:26:50.829 }, 00:26:50.829 { 00:26:50.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:50.829 "dma_device_type": 2 00:26:50.829 } 00:26:50.829 ], 00:26:50.829 "driver_specific": {} 00:26:50.829 } 00:26:50.829 ] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:50.829 13:38:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.829 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.829 "name": "Existed_Raid", 00:26:50.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.829 "strip_size_kb": 64, 00:26:50.829 "state": "configuring", 00:26:50.829 "raid_level": "raid0", 00:26:50.829 "superblock": false, 00:26:50.829 "num_base_bdevs": 4, 00:26:50.829 "num_base_bdevs_discovered": 2, 00:26:50.829 "num_base_bdevs_operational": 4, 00:26:50.829 "base_bdevs_list": [ 00:26:50.829 { 00:26:50.829 "name": "BaseBdev1", 00:26:50.829 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:50.829 "is_configured": true, 00:26:50.829 "data_offset": 0, 
00:26:50.829 "data_size": 65536 00:26:50.829 }, 00:26:50.829 { 00:26:50.829 "name": "BaseBdev2", 00:26:50.829 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:50.829 "is_configured": true, 00:26:50.829 "data_offset": 0, 00:26:50.829 "data_size": 65536 00:26:50.829 }, 00:26:50.829 { 00:26:50.829 "name": "BaseBdev3", 00:26:50.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.829 "is_configured": false, 00:26:50.829 "data_offset": 0, 00:26:50.829 "data_size": 0 00:26:50.829 }, 00:26:50.829 { 00:26:50.829 "name": "BaseBdev4", 00:26:50.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.829 "is_configured": false, 00:26:50.829 "data_offset": 0, 00:26:50.829 "data_size": 0 00:26:50.829 } 00:26:50.829 ] 00:26:50.829 }' 00:26:50.830 13:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.830 13:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 [2024-10-28 13:38:05.304476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:51.395 BaseBdev3 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 [ 00:26:51.395 { 00:26:51.395 "name": "BaseBdev3", 00:26:51.395 "aliases": [ 00:26:51.395 "a4e63cef-9dc7-44e1-9892-2ddcf8207668" 00:26:51.395 ], 00:26:51.395 "product_name": "Malloc disk", 00:26:51.395 "block_size": 512, 00:26:51.395 "num_blocks": 65536, 00:26:51.395 "uuid": "a4e63cef-9dc7-44e1-9892-2ddcf8207668", 00:26:51.395 "assigned_rate_limits": { 00:26:51.395 "rw_ios_per_sec": 0, 00:26:51.395 "rw_mbytes_per_sec": 0, 00:26:51.395 "r_mbytes_per_sec": 0, 00:26:51.395 "w_mbytes_per_sec": 0 00:26:51.395 }, 00:26:51.395 "claimed": true, 00:26:51.395 "claim_type": "exclusive_write", 00:26:51.395 "zoned": false, 00:26:51.395 "supported_io_types": { 00:26:51.395 "read": true, 00:26:51.395 "write": true, 00:26:51.395 "unmap": true, 00:26:51.395 "flush": true, 00:26:51.395 "reset": true, 00:26:51.395 "nvme_admin": false, 00:26:51.395 "nvme_io": false, 00:26:51.395 "nvme_io_md": false, 00:26:51.395 "write_zeroes": true, 00:26:51.395 "zcopy": true, 00:26:51.395 
"get_zone_info": false, 00:26:51.395 "zone_management": false, 00:26:51.395 "zone_append": false, 00:26:51.395 "compare": false, 00:26:51.395 "compare_and_write": false, 00:26:51.395 "abort": true, 00:26:51.395 "seek_hole": false, 00:26:51.395 "seek_data": false, 00:26:51.395 "copy": true, 00:26:51.395 "nvme_iov_md": false 00:26:51.395 }, 00:26:51.395 "memory_domains": [ 00:26:51.395 { 00:26:51.395 "dma_device_id": "system", 00:26:51.395 "dma_device_type": 1 00:26:51.395 }, 00:26:51.395 { 00:26:51.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.395 "dma_device_type": 2 00:26:51.395 } 00:26:51.395 ], 00:26:51.395 "driver_specific": {} 00:26:51.395 } 00:26:51.395 ] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.395 "name": "Existed_Raid", 00:26:51.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.395 "strip_size_kb": 64, 00:26:51.395 "state": "configuring", 00:26:51.395 "raid_level": "raid0", 00:26:51.395 "superblock": false, 00:26:51.395 "num_base_bdevs": 4, 00:26:51.395 "num_base_bdevs_discovered": 3, 00:26:51.395 "num_base_bdevs_operational": 4, 00:26:51.395 "base_bdevs_list": [ 00:26:51.395 { 00:26:51.395 "name": "BaseBdev1", 00:26:51.395 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:51.395 "is_configured": true, 00:26:51.395 "data_offset": 0, 00:26:51.395 "data_size": 65536 00:26:51.395 }, 00:26:51.395 { 00:26:51.395 "name": "BaseBdev2", 00:26:51.395 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:51.395 "is_configured": true, 00:26:51.395 "data_offset": 0, 00:26:51.395 "data_size": 65536 00:26:51.395 }, 00:26:51.395 { 00:26:51.395 "name": "BaseBdev3", 00:26:51.395 "uuid": "a4e63cef-9dc7-44e1-9892-2ddcf8207668", 00:26:51.395 "is_configured": true, 00:26:51.395 "data_offset": 0, 00:26:51.395 "data_size": 65536 
00:26:51.395 }, 00:26:51.395 { 00:26:51.395 "name": "BaseBdev4", 00:26:51.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.395 "is_configured": false, 00:26:51.395 "data_offset": 0, 00:26:51.395 "data_size": 0 00:26:51.395 } 00:26:51.395 ] 00:26:51.395 }' 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.395 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.963 [2024-10-28 13:38:05.885553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:51.963 [2024-10-28 13:38:05.885613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:51.963 [2024-10-28 13:38:05.885633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:51.963 [2024-10-28 13:38:05.886027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:51.963 [2024-10-28 13:38:05.886280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:51.963 [2024-10-28 13:38:05.886314] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:26:51.963 [2024-10-28 13:38:05.886620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.963 BaseBdev4 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:26:51.963 13:38:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.963 [ 00:26:51.963 { 00:26:51.963 "name": "BaseBdev4", 00:26:51.963 "aliases": [ 00:26:51.963 "134d91f5-a812-4ccd-953a-ce012a0691fe" 00:26:51.963 ], 00:26:51.963 "product_name": "Malloc disk", 00:26:51.963 "block_size": 512, 00:26:51.963 "num_blocks": 65536, 00:26:51.963 "uuid": "134d91f5-a812-4ccd-953a-ce012a0691fe", 00:26:51.963 "assigned_rate_limits": { 00:26:51.963 "rw_ios_per_sec": 0, 00:26:51.963 "rw_mbytes_per_sec": 0, 00:26:51.963 "r_mbytes_per_sec": 0, 00:26:51.963 "w_mbytes_per_sec": 0 00:26:51.963 }, 00:26:51.963 "claimed": true, 00:26:51.963 "claim_type": "exclusive_write", 00:26:51.963 "zoned": false, 00:26:51.963 "supported_io_types": { 
00:26:51.963 "read": true, 00:26:51.963 "write": true, 00:26:51.963 "unmap": true, 00:26:51.963 "flush": true, 00:26:51.963 "reset": true, 00:26:51.963 "nvme_admin": false, 00:26:51.963 "nvme_io": false, 00:26:51.963 "nvme_io_md": false, 00:26:51.963 "write_zeroes": true, 00:26:51.963 "zcopy": true, 00:26:51.963 "get_zone_info": false, 00:26:51.963 "zone_management": false, 00:26:51.963 "zone_append": false, 00:26:51.963 "compare": false, 00:26:51.963 "compare_and_write": false, 00:26:51.963 "abort": true, 00:26:51.963 "seek_hole": false, 00:26:51.963 "seek_data": false, 00:26:51.963 "copy": true, 00:26:51.963 "nvme_iov_md": false 00:26:51.963 }, 00:26:51.963 "memory_domains": [ 00:26:51.963 { 00:26:51.963 "dma_device_id": "system", 00:26:51.963 "dma_device_type": 1 00:26:51.963 }, 00:26:51.963 { 00:26:51.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:51.963 "dma_device_type": 2 00:26:51.963 } 00:26:51.963 ], 00:26:51.963 "driver_specific": {} 00:26:51.963 } 00:26:51.963 ] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:51.963 "name": "Existed_Raid", 00:26:51.963 "uuid": "8c573f8f-8891-42b6-bffe-f9206d3f3496", 00:26:51.963 "strip_size_kb": 64, 00:26:51.963 "state": "online", 00:26:51.963 "raid_level": "raid0", 00:26:51.963 "superblock": false, 00:26:51.963 "num_base_bdevs": 4, 00:26:51.963 "num_base_bdevs_discovered": 4, 00:26:51.963 "num_base_bdevs_operational": 4, 00:26:51.963 "base_bdevs_list": [ 00:26:51.963 { 00:26:51.963 "name": "BaseBdev1", 00:26:51.963 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:51.963 "is_configured": true, 00:26:51.963 "data_offset": 0, 00:26:51.963 "data_size": 65536 00:26:51.963 }, 00:26:51.963 { 00:26:51.963 "name": "BaseBdev2", 00:26:51.963 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:51.963 
"is_configured": true, 00:26:51.963 "data_offset": 0, 00:26:51.963 "data_size": 65536 00:26:51.963 }, 00:26:51.963 { 00:26:51.963 "name": "BaseBdev3", 00:26:51.963 "uuid": "a4e63cef-9dc7-44e1-9892-2ddcf8207668", 00:26:51.963 "is_configured": true, 00:26:51.963 "data_offset": 0, 00:26:51.963 "data_size": 65536 00:26:51.963 }, 00:26:51.963 { 00:26:51.963 "name": "BaseBdev4", 00:26:51.963 "uuid": "134d91f5-a812-4ccd-953a-ce012a0691fe", 00:26:51.963 "is_configured": true, 00:26:51.963 "data_offset": 0, 00:26:51.963 "data_size": 65536 00:26:51.963 } 00:26:51.963 ] 00:26:51.963 }' 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:51.963 13:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.530 [2024-10-28 13:38:06.474337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.530 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:52.530 "name": "Existed_Raid", 00:26:52.530 "aliases": [ 00:26:52.530 "8c573f8f-8891-42b6-bffe-f9206d3f3496" 00:26:52.530 ], 00:26:52.530 "product_name": "Raid Volume", 00:26:52.530 "block_size": 512, 00:26:52.530 "num_blocks": 262144, 00:26:52.530 "uuid": "8c573f8f-8891-42b6-bffe-f9206d3f3496", 00:26:52.530 "assigned_rate_limits": { 00:26:52.531 "rw_ios_per_sec": 0, 00:26:52.531 "rw_mbytes_per_sec": 0, 00:26:52.531 "r_mbytes_per_sec": 0, 00:26:52.531 "w_mbytes_per_sec": 0 00:26:52.531 }, 00:26:52.531 "claimed": false, 00:26:52.531 "zoned": false, 00:26:52.531 "supported_io_types": { 00:26:52.531 "read": true, 00:26:52.531 "write": true, 00:26:52.531 "unmap": true, 00:26:52.531 "flush": true, 00:26:52.531 "reset": true, 00:26:52.531 "nvme_admin": false, 00:26:52.531 "nvme_io": false, 00:26:52.531 "nvme_io_md": false, 00:26:52.531 "write_zeroes": true, 00:26:52.531 "zcopy": false, 00:26:52.531 "get_zone_info": false, 00:26:52.531 "zone_management": false, 00:26:52.531 "zone_append": false, 00:26:52.531 "compare": false, 00:26:52.531 "compare_and_write": false, 00:26:52.531 "abort": false, 00:26:52.531 "seek_hole": false, 00:26:52.531 "seek_data": false, 00:26:52.531 "copy": false, 00:26:52.531 "nvme_iov_md": false 00:26:52.531 }, 00:26:52.531 "memory_domains": [ 00:26:52.531 { 00:26:52.531 "dma_device_id": "system", 00:26:52.531 "dma_device_type": 1 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.531 "dma_device_type": 2 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "system", 00:26:52.531 "dma_device_type": 1 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.531 "dma_device_type": 2 00:26:52.531 }, 00:26:52.531 { 
00:26:52.531 "dma_device_id": "system", 00:26:52.531 "dma_device_type": 1 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.531 "dma_device_type": 2 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "system", 00:26:52.531 "dma_device_type": 1 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:52.531 "dma_device_type": 2 00:26:52.531 } 00:26:52.531 ], 00:26:52.531 "driver_specific": { 00:26:52.531 "raid": { 00:26:52.531 "uuid": "8c573f8f-8891-42b6-bffe-f9206d3f3496", 00:26:52.531 "strip_size_kb": 64, 00:26:52.531 "state": "online", 00:26:52.531 "raid_level": "raid0", 00:26:52.531 "superblock": false, 00:26:52.531 "num_base_bdevs": 4, 00:26:52.531 "num_base_bdevs_discovered": 4, 00:26:52.531 "num_base_bdevs_operational": 4, 00:26:52.531 "base_bdevs_list": [ 00:26:52.531 { 00:26:52.531 "name": "BaseBdev1", 00:26:52.531 "uuid": "5d59b121-21d8-42d8-b970-8e814df97b3a", 00:26:52.531 "is_configured": true, 00:26:52.531 "data_offset": 0, 00:26:52.531 "data_size": 65536 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "name": "BaseBdev2", 00:26:52.531 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:52.531 "is_configured": true, 00:26:52.531 "data_offset": 0, 00:26:52.531 "data_size": 65536 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "name": "BaseBdev3", 00:26:52.531 "uuid": "a4e63cef-9dc7-44e1-9892-2ddcf8207668", 00:26:52.531 "is_configured": true, 00:26:52.531 "data_offset": 0, 00:26:52.531 "data_size": 65536 00:26:52.531 }, 00:26:52.531 { 00:26:52.531 "name": "BaseBdev4", 00:26:52.531 "uuid": "134d91f5-a812-4ccd-953a-ce012a0691fe", 00:26:52.531 "is_configured": true, 00:26:52.531 "data_offset": 0, 00:26:52.531 "data_size": 65536 00:26:52.531 } 00:26:52.531 ] 00:26:52.531 } 00:26:52.531 } 00:26:52.531 }' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:52.531 BaseBdev2 00:26:52.531 BaseBdev3 00:26:52.531 BaseBdev4' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.531 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 [2024-10-28 13:38:06.855051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:52.790 [2024-10-28 13:38:06.855110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.790 [2024-10-28 13:38:06.855229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.790 "name": "Existed_Raid", 00:26:52.790 "uuid": "8c573f8f-8891-42b6-bffe-f9206d3f3496", 00:26:52.790 "strip_size_kb": 64, 00:26:52.790 "state": "offline", 00:26:52.790 "raid_level": "raid0", 00:26:52.790 "superblock": false, 00:26:52.790 "num_base_bdevs": 4, 00:26:52.790 "num_base_bdevs_discovered": 3, 00:26:52.790 "num_base_bdevs_operational": 3, 00:26:52.790 "base_bdevs_list": [ 00:26:52.790 { 00:26:52.790 "name": null, 00:26:52.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.790 "is_configured": false, 00:26:52.790 "data_offset": 0, 00:26:52.790 "data_size": 65536 00:26:52.790 }, 
00:26:52.790 { 00:26:52.790 "name": "BaseBdev2", 00:26:52.790 "uuid": "af7e9e77-cf45-4dd3-8bd1-7d7ce7c59d8e", 00:26:52.790 "is_configured": true, 00:26:52.790 "data_offset": 0, 00:26:52.790 "data_size": 65536 00:26:52.790 }, 00:26:52.790 { 00:26:52.790 "name": "BaseBdev3", 00:26:52.790 "uuid": "a4e63cef-9dc7-44e1-9892-2ddcf8207668", 00:26:52.790 "is_configured": true, 00:26:52.790 "data_offset": 0, 00:26:52.790 "data_size": 65536 00:26:52.790 }, 00:26:52.790 { 00:26:52.790 "name": "BaseBdev4", 00:26:52.790 "uuid": "134d91f5-a812-4ccd-953a-ce012a0691fe", 00:26:52.790 "is_configured": true, 00:26:52.790 "data_offset": 0, 00:26:52.790 "data_size": 65536 00:26:52.790 } 00:26:52.790 ] 00:26:52.790 }' 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.790 13:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.624 [2024-10-28 13:38:07.534769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.624 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 [2024-10-28 13:38:07.614207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 [2024-10-28 13:38:07.690370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:53.625 [2024-10-28 13:38:07.690456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 BaseBdev2 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.625 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.885 [ 00:26:53.885 { 00:26:53.885 "name": "BaseBdev2", 00:26:53.885 "aliases": [ 00:26:53.885 "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4" 00:26:53.885 ], 00:26:53.885 "product_name": "Malloc disk", 00:26:53.885 "block_size": 512, 00:26:53.885 "num_blocks": 65536, 00:26:53.885 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:53.885 "assigned_rate_limits": { 00:26:53.885 "rw_ios_per_sec": 0, 00:26:53.885 "rw_mbytes_per_sec": 0, 00:26:53.885 "r_mbytes_per_sec": 0, 00:26:53.885 "w_mbytes_per_sec": 0 00:26:53.885 }, 00:26:53.885 "claimed": false, 00:26:53.885 "zoned": false, 00:26:53.885 "supported_io_types": { 00:26:53.885 "read": true, 00:26:53.885 "write": true, 00:26:53.885 "unmap": true, 00:26:53.885 "flush": true, 00:26:53.885 "reset": true, 00:26:53.885 "nvme_admin": false, 00:26:53.885 "nvme_io": false, 00:26:53.885 "nvme_io_md": false, 00:26:53.885 "write_zeroes": true, 00:26:53.885 "zcopy": true, 00:26:53.885 "get_zone_info": false, 00:26:53.885 "zone_management": false, 00:26:53.885 "zone_append": false, 00:26:53.885 "compare": false, 00:26:53.885 
"compare_and_write": false, 00:26:53.885 "abort": true, 00:26:53.885 "seek_hole": false, 00:26:53.885 "seek_data": false, 00:26:53.885 "copy": true, 00:26:53.885 "nvme_iov_md": false 00:26:53.885 }, 00:26:53.885 "memory_domains": [ 00:26:53.885 { 00:26:53.885 "dma_device_id": "system", 00:26:53.885 "dma_device_type": 1 00:26:53.885 }, 00:26:53.885 { 00:26:53.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.885 "dma_device_type": 2 00:26:53.885 } 00:26:53.885 ], 00:26:53.885 "driver_specific": {} 00:26:53.885 } 00:26:53.885 ] 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:53.885 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 BaseBdev3 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 [ 00:26:53.886 { 00:26:53.886 "name": "BaseBdev3", 00:26:53.886 "aliases": [ 00:26:53.886 "4dc88906-ea80-42a0-9473-55cae315feba" 00:26:53.886 ], 00:26:53.886 "product_name": "Malloc disk", 00:26:53.886 "block_size": 512, 00:26:53.886 "num_blocks": 65536, 00:26:53.886 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:53.886 "assigned_rate_limits": { 00:26:53.886 "rw_ios_per_sec": 0, 00:26:53.886 "rw_mbytes_per_sec": 0, 00:26:53.886 "r_mbytes_per_sec": 0, 00:26:53.886 "w_mbytes_per_sec": 0 00:26:53.886 }, 00:26:53.886 "claimed": false, 00:26:53.886 "zoned": false, 00:26:53.886 "supported_io_types": { 00:26:53.886 "read": true, 00:26:53.886 "write": true, 00:26:53.886 "unmap": true, 00:26:53.886 "flush": true, 00:26:53.886 "reset": true, 00:26:53.886 "nvme_admin": false, 00:26:53.886 "nvme_io": false, 00:26:53.886 "nvme_io_md": false, 00:26:53.886 "write_zeroes": true, 00:26:53.886 "zcopy": true, 00:26:53.886 "get_zone_info": false, 00:26:53.886 "zone_management": false, 00:26:53.886 "zone_append": false, 00:26:53.886 "compare": false, 00:26:53.886 
"compare_and_write": false, 00:26:53.886 "abort": true, 00:26:53.886 "seek_hole": false, 00:26:53.886 "seek_data": false, 00:26:53.886 "copy": true, 00:26:53.886 "nvme_iov_md": false 00:26:53.886 }, 00:26:53.886 "memory_domains": [ 00:26:53.886 { 00:26:53.886 "dma_device_id": "system", 00:26:53.886 "dma_device_type": 1 00:26:53.886 }, 00:26:53.886 { 00:26:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.886 "dma_device_type": 2 00:26:53.886 } 00:26:53.886 ], 00:26:53.886 "driver_specific": {} 00:26:53.886 } 00:26:53.886 ] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 BaseBdev4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 [ 00:26:53.886 { 00:26:53.886 "name": "BaseBdev4", 00:26:53.886 "aliases": [ 00:26:53.886 "babaf534-3eae-46b5-8877-7607fae8a4cc" 00:26:53.886 ], 00:26:53.886 "product_name": "Malloc disk", 00:26:53.886 "block_size": 512, 00:26:53.886 "num_blocks": 65536, 00:26:53.886 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:53.886 "assigned_rate_limits": { 00:26:53.886 "rw_ios_per_sec": 0, 00:26:53.886 "rw_mbytes_per_sec": 0, 00:26:53.886 "r_mbytes_per_sec": 0, 00:26:53.886 "w_mbytes_per_sec": 0 00:26:53.886 }, 00:26:53.886 "claimed": false, 00:26:53.886 "zoned": false, 00:26:53.886 "supported_io_types": { 00:26:53.886 "read": true, 00:26:53.886 "write": true, 00:26:53.886 "unmap": true, 00:26:53.886 "flush": true, 00:26:53.886 "reset": true, 00:26:53.886 "nvme_admin": false, 00:26:53.886 "nvme_io": false, 00:26:53.886 "nvme_io_md": false, 00:26:53.886 "write_zeroes": true, 00:26:53.886 "zcopy": true, 00:26:53.886 "get_zone_info": false, 00:26:53.886 "zone_management": false, 00:26:53.886 "zone_append": false, 00:26:53.886 "compare": false, 00:26:53.886 
"compare_and_write": false, 00:26:53.886 "abort": true, 00:26:53.886 "seek_hole": false, 00:26:53.886 "seek_data": false, 00:26:53.886 "copy": true, 00:26:53.886 "nvme_iov_md": false 00:26:53.886 }, 00:26:53.886 "memory_domains": [ 00:26:53.886 { 00:26:53.886 "dma_device_id": "system", 00:26:53.886 "dma_device_type": 1 00:26:53.886 }, 00:26:53.886 { 00:26:53.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.886 "dma_device_type": 2 00:26:53.886 } 00:26:53.886 ], 00:26:53.886 "driver_specific": {} 00:26:53.886 } 00:26:53.886 ] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 [2024-10-28 13:38:07.929553] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:53.886 [2024-10-28 13:38:07.929630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:53.886 [2024-10-28 13:38:07.929656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:53.886 [2024-10-28 13:38:07.932338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:53.886 [2024-10-28 13:38:07.932403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.886 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:53.886 "name": "Existed_Raid", 00:26:53.886 
"uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.886 "strip_size_kb": 64, 00:26:53.886 "state": "configuring", 00:26:53.886 "raid_level": "raid0", 00:26:53.886 "superblock": false, 00:26:53.886 "num_base_bdevs": 4, 00:26:53.886 "num_base_bdevs_discovered": 3, 00:26:53.886 "num_base_bdevs_operational": 4, 00:26:53.886 "base_bdevs_list": [ 00:26:53.886 { 00:26:53.886 "name": "BaseBdev1", 00:26:53.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.887 "is_configured": false, 00:26:53.887 "data_offset": 0, 00:26:53.887 "data_size": 0 00:26:53.887 }, 00:26:53.887 { 00:26:53.887 "name": "BaseBdev2", 00:26:53.887 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:53.887 "is_configured": true, 00:26:53.887 "data_offset": 0, 00:26:53.887 "data_size": 65536 00:26:53.887 }, 00:26:53.887 { 00:26:53.887 "name": "BaseBdev3", 00:26:53.887 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:53.887 "is_configured": true, 00:26:53.887 "data_offset": 0, 00:26:53.887 "data_size": 65536 00:26:53.887 }, 00:26:53.887 { 00:26:53.887 "name": "BaseBdev4", 00:26:53.887 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:53.887 "is_configured": true, 00:26:53.887 "data_offset": 0, 00:26:53.887 "data_size": 65536 00:26:53.887 } 00:26:53.887 ] 00:26:53.887 }' 00:26:53.887 13:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:53.887 13:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.454 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:26:54.454 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.454 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.454 [2024-10-28 13:38:08.477717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:54.455 "name": "Existed_Raid", 00:26:54.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.455 
"strip_size_kb": 64, 00:26:54.455 "state": "configuring", 00:26:54.455 "raid_level": "raid0", 00:26:54.455 "superblock": false, 00:26:54.455 "num_base_bdevs": 4, 00:26:54.455 "num_base_bdevs_discovered": 2, 00:26:54.455 "num_base_bdevs_operational": 4, 00:26:54.455 "base_bdevs_list": [ 00:26:54.455 { 00:26:54.455 "name": "BaseBdev1", 00:26:54.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.455 "is_configured": false, 00:26:54.455 "data_offset": 0, 00:26:54.455 "data_size": 0 00:26:54.455 }, 00:26:54.455 { 00:26:54.455 "name": null, 00:26:54.455 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:54.455 "is_configured": false, 00:26:54.455 "data_offset": 0, 00:26:54.455 "data_size": 65536 00:26:54.455 }, 00:26:54.455 { 00:26:54.455 "name": "BaseBdev3", 00:26:54.455 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:54.455 "is_configured": true, 00:26:54.455 "data_offset": 0, 00:26:54.455 "data_size": 65536 00:26:54.455 }, 00:26:54.455 { 00:26:54.455 "name": "BaseBdev4", 00:26:54.455 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:54.455 "is_configured": true, 00:26:54.455 "data_offset": 0, 00:26:54.455 "data_size": 65536 00:26:54.455 } 00:26:54.455 ] 00:26:54.455 }' 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:54.455 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.022 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.022 13:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:55.022 13:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.022 
13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 [2024-10-28 13:38:09.072410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:55.022 BaseBdev1 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:55.022 13:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 [ 00:26:55.022 { 00:26:55.022 "name": "BaseBdev1", 00:26:55.022 "aliases": [ 00:26:55.022 "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1" 00:26:55.022 ], 00:26:55.022 "product_name": "Malloc disk", 00:26:55.022 "block_size": 512, 00:26:55.022 "num_blocks": 65536, 00:26:55.022 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:55.022 "assigned_rate_limits": { 00:26:55.022 "rw_ios_per_sec": 0, 00:26:55.022 "rw_mbytes_per_sec": 0, 00:26:55.022 "r_mbytes_per_sec": 0, 00:26:55.022 "w_mbytes_per_sec": 0 00:26:55.022 }, 00:26:55.022 "claimed": true, 00:26:55.022 "claim_type": "exclusive_write", 00:26:55.022 "zoned": false, 00:26:55.022 "supported_io_types": { 00:26:55.022 "read": true, 00:26:55.022 "write": true, 00:26:55.022 "unmap": true, 00:26:55.022 "flush": true, 00:26:55.022 "reset": true, 00:26:55.022 "nvme_admin": false, 00:26:55.022 "nvme_io": false, 00:26:55.022 "nvme_io_md": false, 00:26:55.022 "write_zeroes": true, 00:26:55.022 "zcopy": true, 00:26:55.022 "get_zone_info": false, 00:26:55.022 "zone_management": false, 00:26:55.022 "zone_append": false, 00:26:55.022 "compare": false, 00:26:55.022 "compare_and_write": false, 00:26:55.022 "abort": true, 00:26:55.022 "seek_hole": false, 00:26:55.022 "seek_data": false, 00:26:55.022 "copy": true, 00:26:55.022 "nvme_iov_md": false 00:26:55.022 }, 00:26:55.022 "memory_domains": [ 00:26:55.022 { 00:26:55.022 "dma_device_id": "system", 00:26:55.022 "dma_device_type": 1 00:26:55.022 }, 00:26:55.022 { 00:26:55.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.022 "dma_device_type": 2 00:26:55.022 } 00:26:55.022 ], 00:26:55.022 "driver_specific": {} 00:26:55.022 } 00:26:55.022 ] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.022 13:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.022 "name": "Existed_Raid", 00:26:55.022 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:55.022 "strip_size_kb": 64, 00:26:55.022 "state": "configuring", 00:26:55.022 "raid_level": "raid0", 00:26:55.022 "superblock": false, 00:26:55.022 "num_base_bdevs": 4, 00:26:55.022 "num_base_bdevs_discovered": 3, 00:26:55.022 "num_base_bdevs_operational": 4, 00:26:55.022 "base_bdevs_list": [ 00:26:55.022 { 00:26:55.022 "name": "BaseBdev1", 00:26:55.022 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:55.022 "is_configured": true, 00:26:55.022 "data_offset": 0, 00:26:55.022 "data_size": 65536 00:26:55.022 }, 00:26:55.022 { 00:26:55.022 "name": null, 00:26:55.022 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:55.022 "is_configured": false, 00:26:55.022 "data_offset": 0, 00:26:55.022 "data_size": 65536 00:26:55.022 }, 00:26:55.022 { 00:26:55.022 "name": "BaseBdev3", 00:26:55.022 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:55.022 "is_configured": true, 00:26:55.022 "data_offset": 0, 00:26:55.022 "data_size": 65536 00:26:55.022 }, 00:26:55.022 { 00:26:55.022 "name": "BaseBdev4", 00:26:55.022 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:55.022 "is_configured": true, 00:26:55.022 "data_offset": 0, 00:26:55.022 "data_size": 65536 00:26:55.022 } 00:26:55.022 ] 00:26:55.022 }' 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.022 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.595 [2024-10-28 13:38:09.684699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.595 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.596 "name": "Existed_Raid", 00:26:55.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.596 "strip_size_kb": 64, 00:26:55.596 "state": "configuring", 00:26:55.596 "raid_level": "raid0", 00:26:55.596 "superblock": false, 00:26:55.596 "num_base_bdevs": 4, 00:26:55.596 "num_base_bdevs_discovered": 2, 00:26:55.596 "num_base_bdevs_operational": 4, 00:26:55.596 "base_bdevs_list": [ 00:26:55.596 { 00:26:55.596 "name": "BaseBdev1", 00:26:55.596 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:55.596 "is_configured": true, 00:26:55.596 "data_offset": 0, 00:26:55.596 "data_size": 65536 00:26:55.596 }, 00:26:55.596 { 00:26:55.596 "name": null, 00:26:55.596 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:55.596 "is_configured": false, 00:26:55.596 "data_offset": 0, 00:26:55.596 "data_size": 65536 00:26:55.596 }, 00:26:55.596 { 00:26:55.596 "name": null, 00:26:55.596 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:55.596 "is_configured": false, 00:26:55.596 "data_offset": 0, 00:26:55.596 "data_size": 65536 00:26:55.596 }, 00:26:55.596 { 00:26:55.596 "name": "BaseBdev4", 00:26:55.596 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:55.596 "is_configured": true, 00:26:55.596 "data_offset": 0, 00:26:55.596 "data_size": 65536 00:26:55.596 } 00:26:55.596 ] 00:26:55.596 }' 00:26:55.596 13:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.596 13:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.164 [2024-10-28 13:38:10.240934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:56.164 13:38:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.164 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.165 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.165 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.165 "name": "Existed_Raid", 00:26:56.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.165 "strip_size_kb": 64, 00:26:56.165 "state": "configuring", 00:26:56.165 "raid_level": "raid0", 00:26:56.165 "superblock": false, 00:26:56.165 "num_base_bdevs": 4, 00:26:56.165 "num_base_bdevs_discovered": 3, 00:26:56.165 "num_base_bdevs_operational": 4, 00:26:56.165 "base_bdevs_list": [ 00:26:56.165 { 00:26:56.165 "name": "BaseBdev1", 00:26:56.165 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:56.165 "is_configured": true, 00:26:56.165 "data_offset": 0, 00:26:56.165 "data_size": 65536 00:26:56.165 }, 00:26:56.165 { 00:26:56.165 "name": null, 00:26:56.165 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:56.165 "is_configured": false, 00:26:56.165 "data_offset": 
0, 00:26:56.165 "data_size": 65536 00:26:56.165 }, 00:26:56.165 { 00:26:56.165 "name": "BaseBdev3", 00:26:56.165 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:56.165 "is_configured": true, 00:26:56.165 "data_offset": 0, 00:26:56.165 "data_size": 65536 00:26:56.165 }, 00:26:56.165 { 00:26:56.165 "name": "BaseBdev4", 00:26:56.165 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:56.165 "is_configured": true, 00:26:56.165 "data_offset": 0, 00:26:56.165 "data_size": 65536 00:26:56.165 } 00:26:56.165 ] 00:26:56.165 }' 00:26:56.165 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.165 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 [2024-10-28 13:38:10.757183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.732 13:38:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.732 "name": "Existed_Raid", 00:26:56.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.732 "strip_size_kb": 64, 00:26:56.732 "state": "configuring", 00:26:56.732 
"raid_level": "raid0", 00:26:56.732 "superblock": false, 00:26:56.732 "num_base_bdevs": 4, 00:26:56.732 "num_base_bdevs_discovered": 2, 00:26:56.732 "num_base_bdevs_operational": 4, 00:26:56.732 "base_bdevs_list": [ 00:26:56.732 { 00:26:56.732 "name": null, 00:26:56.732 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:56.732 "is_configured": false, 00:26:56.732 "data_offset": 0, 00:26:56.732 "data_size": 65536 00:26:56.732 }, 00:26:56.732 { 00:26:56.732 "name": null, 00:26:56.732 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:56.732 "is_configured": false, 00:26:56.732 "data_offset": 0, 00:26:56.732 "data_size": 65536 00:26:56.732 }, 00:26:56.732 { 00:26:56.732 "name": "BaseBdev3", 00:26:56.732 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:56.732 "is_configured": true, 00:26:56.732 "data_offset": 0, 00:26:56.732 "data_size": 65536 00:26:56.732 }, 00:26:56.732 { 00:26:56.732 "name": "BaseBdev4", 00:26:56.732 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:56.732 "is_configured": true, 00:26:56.732 "data_offset": 0, 00:26:56.732 "data_size": 65536 00:26:56.732 } 00:26:56.732 ] 00:26:56.732 }' 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.732 13:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.299 [2024-10-28 13:38:11.313080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.299 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.299 "name": "Existed_Raid", 00:26:57.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.299 "strip_size_kb": 64, 00:26:57.299 "state": "configuring", 00:26:57.299 "raid_level": "raid0", 00:26:57.299 "superblock": false, 00:26:57.299 "num_base_bdevs": 4, 00:26:57.299 "num_base_bdevs_discovered": 3, 00:26:57.299 "num_base_bdevs_operational": 4, 00:26:57.299 "base_bdevs_list": [ 00:26:57.299 { 00:26:57.299 "name": null, 00:26:57.299 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:57.299 "is_configured": false, 00:26:57.299 "data_offset": 0, 00:26:57.299 "data_size": 65536 00:26:57.299 }, 00:26:57.299 { 00:26:57.299 "name": "BaseBdev2", 00:26:57.299 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:57.299 "is_configured": true, 00:26:57.299 "data_offset": 0, 00:26:57.299 "data_size": 65536 00:26:57.299 }, 00:26:57.300 { 00:26:57.300 "name": "BaseBdev3", 00:26:57.300 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:57.300 "is_configured": true, 00:26:57.300 "data_offset": 0, 00:26:57.300 "data_size": 65536 00:26:57.300 }, 00:26:57.300 { 00:26:57.300 "name": "BaseBdev4", 00:26:57.300 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:57.300 "is_configured": true, 00:26:57.300 "data_offset": 0, 00:26:57.300 "data_size": 65536 00:26:57.300 } 00:26:57.300 ] 00:26:57.300 }' 00:26:57.300 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.300 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 13:38:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u feedefb4-de1c-46eb-8bac-a96d6b1d8ab1 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 [2024-10-28 13:38:11.995450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:57.867 [2024-10-28 13:38:11.995520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:57.867 [2024-10-28 13:38:11.995566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:26:57.867 
[2024-10-28 13:38:11.995907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:26:57.867 [2024-10-28 13:38:11.996106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:57.867 [2024-10-28 13:38:11.996133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:57.867 [2024-10-28 13:38:11.996460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.867 NewBaseBdev 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:57.867 13:38:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:57.867 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:57.867 [ 00:26:57.867 { 00:26:57.867 "name": "NewBaseBdev", 00:26:57.867 "aliases": [ 00:26:57.867 "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1" 00:26:57.867 ], 00:26:57.867 "product_name": "Malloc disk", 00:26:57.867 "block_size": 512, 00:26:57.867 "num_blocks": 65536, 00:26:57.867 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:57.867 "assigned_rate_limits": { 00:26:57.867 "rw_ios_per_sec": 0, 00:26:57.867 "rw_mbytes_per_sec": 0, 00:26:57.867 "r_mbytes_per_sec": 0, 00:26:57.867 "w_mbytes_per_sec": 0 00:26:57.867 }, 00:26:57.867 "claimed": true, 00:26:57.867 "claim_type": "exclusive_write", 00:26:57.867 "zoned": false, 00:26:57.867 "supported_io_types": { 00:26:57.867 "read": true, 00:26:57.867 "write": true, 00:26:57.867 "unmap": true, 00:26:57.867 "flush": true, 00:26:57.867 "reset": true, 00:26:57.867 "nvme_admin": false, 00:26:57.867 "nvme_io": false, 00:26:57.867 "nvme_io_md": false, 00:26:57.867 "write_zeroes": true, 00:26:57.867 "zcopy": true, 00:26:57.867 "get_zone_info": false, 00:26:57.867 "zone_management": false, 00:26:57.867 "zone_append": false, 00:26:57.867 "compare": false, 00:26:57.867 "compare_and_write": false, 00:26:57.867 "abort": true, 00:26:57.867 "seek_hole": false, 00:26:57.867 "seek_data": false, 00:26:57.867 "copy": true, 00:26:57.867 "nvme_iov_md": false 00:26:57.867 }, 00:26:57.867 "memory_domains": [ 00:26:57.867 { 00:26:57.867 "dma_device_id": "system", 00:26:57.867 "dma_device_type": 1 00:26:57.867 }, 00:26:57.867 { 00:26:57.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.867 "dma_device_type": 2 00:26:58.126 } 00:26:58.126 ], 00:26:58.126 "driver_specific": {} 00:26:58.126 } 00:26:58.126 ] 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:58.126 "name": "Existed_Raid", 00:26:58.126 "uuid": "0da80f16-c0d7-4e7c-b9ef-f78b4f4553c9", 00:26:58.126 "strip_size_kb": 64, 00:26:58.126 "state": "online", 
00:26:58.126 "raid_level": "raid0", 00:26:58.126 "superblock": false, 00:26:58.126 "num_base_bdevs": 4, 00:26:58.126 "num_base_bdevs_discovered": 4, 00:26:58.126 "num_base_bdevs_operational": 4, 00:26:58.126 "base_bdevs_list": [ 00:26:58.126 { 00:26:58.126 "name": "NewBaseBdev", 00:26:58.126 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:58.126 "is_configured": true, 00:26:58.126 "data_offset": 0, 00:26:58.126 "data_size": 65536 00:26:58.126 }, 00:26:58.126 { 00:26:58.126 "name": "BaseBdev2", 00:26:58.126 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:58.126 "is_configured": true, 00:26:58.126 "data_offset": 0, 00:26:58.126 "data_size": 65536 00:26:58.126 }, 00:26:58.126 { 00:26:58.126 "name": "BaseBdev3", 00:26:58.126 "uuid": "4dc88906-ea80-42a0-9473-55cae315feba", 00:26:58.126 "is_configured": true, 00:26:58.126 "data_offset": 0, 00:26:58.126 "data_size": 65536 00:26:58.126 }, 00:26:58.126 { 00:26:58.126 "name": "BaseBdev4", 00:26:58.126 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:58.126 "is_configured": true, 00:26:58.126 "data_offset": 0, 00:26:58.126 "data_size": 65536 00:26:58.126 } 00:26:58.126 ] 00:26:58.126 }' 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:58.126 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.692 [2024-10-28 13:38:12.580162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.692 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:58.692 "name": "Existed_Raid", 00:26:58.692 "aliases": [ 00:26:58.692 "0da80f16-c0d7-4e7c-b9ef-f78b4f4553c9" 00:26:58.692 ], 00:26:58.692 "product_name": "Raid Volume", 00:26:58.692 "block_size": 512, 00:26:58.692 "num_blocks": 262144, 00:26:58.692 "uuid": "0da80f16-c0d7-4e7c-b9ef-f78b4f4553c9", 00:26:58.692 "assigned_rate_limits": { 00:26:58.692 "rw_ios_per_sec": 0, 00:26:58.692 "rw_mbytes_per_sec": 0, 00:26:58.692 "r_mbytes_per_sec": 0, 00:26:58.692 "w_mbytes_per_sec": 0 00:26:58.692 }, 00:26:58.692 "claimed": false, 00:26:58.692 "zoned": false, 00:26:58.692 "supported_io_types": { 00:26:58.692 "read": true, 00:26:58.693 "write": true, 00:26:58.693 "unmap": true, 00:26:58.693 "flush": true, 00:26:58.693 "reset": true, 00:26:58.693 "nvme_admin": false, 00:26:58.693 "nvme_io": false, 00:26:58.693 "nvme_io_md": false, 00:26:58.693 "write_zeroes": true, 00:26:58.693 "zcopy": false, 00:26:58.693 "get_zone_info": false, 00:26:58.693 "zone_management": false, 00:26:58.693 "zone_append": false, 00:26:58.693 "compare": false, 00:26:58.693 "compare_and_write": false, 00:26:58.693 "abort": false, 00:26:58.693 "seek_hole": false, 00:26:58.693 "seek_data": 
false, 00:26:58.693 "copy": false, 00:26:58.693 "nvme_iov_md": false 00:26:58.693 }, 00:26:58.693 "memory_domains": [ 00:26:58.693 { 00:26:58.693 "dma_device_id": "system", 00:26:58.693 "dma_device_type": 1 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.693 "dma_device_type": 2 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "system", 00:26:58.693 "dma_device_type": 1 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.693 "dma_device_type": 2 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "system", 00:26:58.693 "dma_device_type": 1 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.693 "dma_device_type": 2 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "system", 00:26:58.693 "dma_device_type": 1 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.693 "dma_device_type": 2 00:26:58.693 } 00:26:58.693 ], 00:26:58.693 "driver_specific": { 00:26:58.693 "raid": { 00:26:58.693 "uuid": "0da80f16-c0d7-4e7c-b9ef-f78b4f4553c9", 00:26:58.693 "strip_size_kb": 64, 00:26:58.693 "state": "online", 00:26:58.693 "raid_level": "raid0", 00:26:58.693 "superblock": false, 00:26:58.693 "num_base_bdevs": 4, 00:26:58.693 "num_base_bdevs_discovered": 4, 00:26:58.693 "num_base_bdevs_operational": 4, 00:26:58.693 "base_bdevs_list": [ 00:26:58.693 { 00:26:58.693 "name": "NewBaseBdev", 00:26:58.693 "uuid": "feedefb4-de1c-46eb-8bac-a96d6b1d8ab1", 00:26:58.693 "is_configured": true, 00:26:58.693 "data_offset": 0, 00:26:58.693 "data_size": 65536 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "name": "BaseBdev2", 00:26:58.693 "uuid": "ef9a6631-f26f-4fd1-9177-b6093fbc9bd4", 00:26:58.693 "is_configured": true, 00:26:58.693 "data_offset": 0, 00:26:58.693 "data_size": 65536 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "name": "BaseBdev3", 00:26:58.693 "uuid": 
"4dc88906-ea80-42a0-9473-55cae315feba", 00:26:58.693 "is_configured": true, 00:26:58.693 "data_offset": 0, 00:26:58.693 "data_size": 65536 00:26:58.693 }, 00:26:58.693 { 00:26:58.693 "name": "BaseBdev4", 00:26:58.693 "uuid": "babaf534-3eae-46b5-8877-7607fae8a4cc", 00:26:58.693 "is_configured": true, 00:26:58.693 "data_offset": 0, 00:26:58.693 "data_size": 65536 00:26:58.693 } 00:26:58.693 ] 00:26:58.693 } 00:26:58.693 } 00:26:58.693 }' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:26:58.693 BaseBdev2 00:26:58.693 BaseBdev3 00:26:58.693 BaseBdev4' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.693 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.962 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.963 [2024-10-28 13:38:12.943796] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:58.963 [2024-10-28 13:38:12.943858] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:58.963 [2024-10-28 13:38:12.944030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:58.963 [2024-10-28 13:38:12.944123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:58.963 [2024-10-28 13:38:12.944169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.963 13:38:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82145 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82145 ']' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82145 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82145 00:26:58.963 killing process with pid 82145 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82145' 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82145 00:26:58.963 [2024-10-28 13:38:12.983583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:58.963 13:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82145 00:26:58.963 [2024-10-28 13:38:13.059856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:59.241 13:38:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:59.241 00:26:59.241 real 0m11.451s 00:26:59.241 user 0m19.928s 00:26:59.241 sys 0m1.861s 00:26:59.241 13:38:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.241 ************************************ 00:26:59.241 END TEST raid_state_function_test 00:26:59.241 ************************************ 00:26:59.241 13:38:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.499 13:38:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:26:59.499 13:38:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:59.499 13:38:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.499 13:38:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:59.499 ************************************ 00:26:59.499 START TEST raid_state_function_test_sb 00:26:59.499 ************************************ 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:59.499 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:59.500 13:38:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82829 00:26:59.500 Process raid pid: 82829 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82829' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82829 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82829 ']' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.500 13:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.500 [2024-10-28 13:38:13.570317] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:26:59.500 [2024-10-28 13:38:13.570504] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.759 [2024-10-28 13:38:13.721865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:59.759 [2024-10-28 13:38:13.748359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.759 [2024-10-28 13:38:13.824757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.759 [2024-10-28 13:38:13.912057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.759 [2024-10-28 13:38:13.912112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.695 [2024-10-28 13:38:14.610266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:00.695 [2024-10-28 13:38:14.610358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:00.695 [2024-10-28 13:38:14.610386] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:00.695 [2024-10-28 13:38:14.610402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:00.695 [2024-10-28 13:38:14.610420] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:00.695 [2024-10-28 13:38:14.610432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:00.695 [2024-10-28 13:38:14.610445] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:27:00.695 [2024-10-28 13:38:14.610457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.695 "name": "Existed_Raid", 00:27:00.695 "uuid": "067c25ae-49ad-4670-8263-3e6253965566", 00:27:00.695 "strip_size_kb": 64, 00:27:00.695 "state": "configuring", 00:27:00.695 "raid_level": "raid0", 00:27:00.695 "superblock": true, 00:27:00.695 "num_base_bdevs": 4, 00:27:00.695 "num_base_bdevs_discovered": 0, 00:27:00.695 "num_base_bdevs_operational": 4, 00:27:00.695 "base_bdevs_list": [ 00:27:00.695 { 00:27:00.695 "name": "BaseBdev1", 00:27:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.695 "is_configured": false, 00:27:00.695 "data_offset": 0, 00:27:00.695 "data_size": 0 00:27:00.695 }, 00:27:00.695 { 00:27:00.695 "name": "BaseBdev2", 00:27:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.695 "is_configured": false, 00:27:00.695 "data_offset": 0, 00:27:00.695 "data_size": 0 00:27:00.695 }, 00:27:00.695 { 00:27:00.695 "name": "BaseBdev3", 00:27:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.695 "is_configured": false, 00:27:00.695 "data_offset": 0, 00:27:00.695 "data_size": 0 00:27:00.695 }, 00:27:00.695 { 00:27:00.695 "name": "BaseBdev4", 00:27:00.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.695 "is_configured": false, 00:27:00.695 "data_offset": 0, 00:27:00.695 "data_size": 0 00:27:00.695 } 00:27:00.695 ] 00:27:00.695 }' 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.695 13:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:27:00.954 [2024-10-28 13:38:15.102235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:00.954 [2024-10-28 13:38:15.102294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.954 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.212 [2024-10-28 13:38:15.114342] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:01.212 [2024-10-28 13:38:15.114579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:01.212 [2024-10-28 13:38:15.114746] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:01.212 [2024-10-28 13:38:15.114931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:01.212 [2024-10-28 13:38:15.115079] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:01.212 [2024-10-28 13:38:15.115257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:01.212 [2024-10-28 13:38:15.115465] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:01.212 [2024-10-28 13:38:15.115660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:01.212 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.212 13:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:01.212 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.212 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 [2024-10-28 13:38:15.139802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:01.213 BaseBdev1 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 [ 00:27:01.213 { 00:27:01.213 "name": "BaseBdev1", 00:27:01.213 "aliases": [ 00:27:01.213 "b2a07364-c55b-45f5-86d2-b3816545c9cf" 00:27:01.213 ], 00:27:01.213 "product_name": "Malloc disk", 00:27:01.213 "block_size": 512, 00:27:01.213 "num_blocks": 65536, 00:27:01.213 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:01.213 "assigned_rate_limits": { 00:27:01.213 "rw_ios_per_sec": 0, 00:27:01.213 "rw_mbytes_per_sec": 0, 00:27:01.213 "r_mbytes_per_sec": 0, 00:27:01.213 "w_mbytes_per_sec": 0 00:27:01.213 }, 00:27:01.213 "claimed": true, 00:27:01.213 "claim_type": "exclusive_write", 00:27:01.213 "zoned": false, 00:27:01.213 "supported_io_types": { 00:27:01.213 "read": true, 00:27:01.213 "write": true, 00:27:01.213 "unmap": true, 00:27:01.213 "flush": true, 00:27:01.213 "reset": true, 00:27:01.213 "nvme_admin": false, 00:27:01.213 "nvme_io": false, 00:27:01.213 "nvme_io_md": false, 00:27:01.213 "write_zeroes": true, 00:27:01.213 "zcopy": true, 00:27:01.213 "get_zone_info": false, 00:27:01.213 "zone_management": false, 00:27:01.213 "zone_append": false, 00:27:01.213 "compare": false, 00:27:01.213 "compare_and_write": false, 00:27:01.213 "abort": true, 00:27:01.213 "seek_hole": false, 00:27:01.213 "seek_data": false, 00:27:01.213 "copy": true, 00:27:01.213 "nvme_iov_md": false 00:27:01.213 }, 00:27:01.213 "memory_domains": [ 00:27:01.213 { 00:27:01.213 "dma_device_id": "system", 00:27:01.213 "dma_device_type": 1 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.213 "dma_device_type": 2 00:27:01.213 } 00:27:01.213 ], 00:27:01.213 "driver_specific": {} 00:27:01.213 } 00:27:01.213 ] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:01.213 
13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.213 "name": "Existed_Raid", 00:27:01.213 "uuid": "2eb50e20-c4b6-4e08-99f2-57a2e02a23e0", 00:27:01.213 "strip_size_kb": 
64, 00:27:01.213 "state": "configuring", 00:27:01.213 "raid_level": "raid0", 00:27:01.213 "superblock": true, 00:27:01.213 "num_base_bdevs": 4, 00:27:01.213 "num_base_bdevs_discovered": 1, 00:27:01.213 "num_base_bdevs_operational": 4, 00:27:01.213 "base_bdevs_list": [ 00:27:01.213 { 00:27:01.213 "name": "BaseBdev1", 00:27:01.213 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:01.213 "is_configured": true, 00:27:01.213 "data_offset": 2048, 00:27:01.213 "data_size": 63488 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev2", 00:27:01.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.213 "is_configured": false, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 0 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev3", 00:27:01.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.213 "is_configured": false, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 0 00:27:01.213 }, 00:27:01.213 { 00:27:01.213 "name": "BaseBdev4", 00:27:01.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.213 "is_configured": false, 00:27:01.213 "data_offset": 0, 00:27:01.213 "data_size": 0 00:27:01.213 } 00:27:01.213 ] 00:27:01.213 }' 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.213 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 [2024-10-28 13:38:15.680163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:01.779 [2024-10-28 13:38:15.680316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.779 [2024-10-28 13:38:15.688171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:01.779 [2024-10-28 13:38:15.691739] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:01.779 [2024-10-28 13:38:15.691809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:01.779 [2024-10-28 13:38:15.691850] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:01.779 [2024-10-28 13:38:15.691869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:01.779 [2024-10-28 13:38:15.691889] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:01.779 [2024-10-28 13:38:15.691907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:01.779 13:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.779 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.780 "name": "Existed_Raid", 00:27:01.780 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:01.780 "strip_size_kb": 64, 00:27:01.780 "state": "configuring", 00:27:01.780 "raid_level": "raid0", 00:27:01.780 "superblock": true, 00:27:01.780 "num_base_bdevs": 4, 00:27:01.780 
"num_base_bdevs_discovered": 1, 00:27:01.780 "num_base_bdevs_operational": 4, 00:27:01.780 "base_bdevs_list": [ 00:27:01.780 { 00:27:01.780 "name": "BaseBdev1", 00:27:01.780 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:01.780 "is_configured": true, 00:27:01.780 "data_offset": 2048, 00:27:01.780 "data_size": 63488 00:27:01.780 }, 00:27:01.780 { 00:27:01.780 "name": "BaseBdev2", 00:27:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.780 "is_configured": false, 00:27:01.780 "data_offset": 0, 00:27:01.780 "data_size": 0 00:27:01.780 }, 00:27:01.780 { 00:27:01.780 "name": "BaseBdev3", 00:27:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.780 "is_configured": false, 00:27:01.780 "data_offset": 0, 00:27:01.780 "data_size": 0 00:27:01.780 }, 00:27:01.780 { 00:27:01.780 "name": "BaseBdev4", 00:27:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.780 "is_configured": false, 00:27:01.780 "data_offset": 0, 00:27:01.780 "data_size": 0 00:27:01.780 } 00:27:01.780 ] 00:27:01.780 }' 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.780 13:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.348 [2024-10-28 13:38:16.266792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:02.348 BaseBdev2 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:02.348 13:38:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.348 [ 00:27:02.348 { 00:27:02.348 "name": "BaseBdev2", 00:27:02.348 "aliases": [ 00:27:02.348 "ed1cec01-3081-4e92-95b2-09c2dc87676d" 00:27:02.348 ], 00:27:02.348 "product_name": "Malloc disk", 00:27:02.348 "block_size": 512, 00:27:02.348 "num_blocks": 65536, 00:27:02.348 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:02.348 "assigned_rate_limits": { 00:27:02.348 "rw_ios_per_sec": 0, 00:27:02.348 "rw_mbytes_per_sec": 0, 00:27:02.348 "r_mbytes_per_sec": 0, 00:27:02.348 "w_mbytes_per_sec": 0 00:27:02.348 }, 00:27:02.348 "claimed": true, 00:27:02.348 "claim_type": "exclusive_write", 00:27:02.348 "zoned": false, 
00:27:02.348 "supported_io_types": { 00:27:02.348 "read": true, 00:27:02.348 "write": true, 00:27:02.348 "unmap": true, 00:27:02.348 "flush": true, 00:27:02.348 "reset": true, 00:27:02.348 "nvme_admin": false, 00:27:02.348 "nvme_io": false, 00:27:02.348 "nvme_io_md": false, 00:27:02.348 "write_zeroes": true, 00:27:02.348 "zcopy": true, 00:27:02.348 "get_zone_info": false, 00:27:02.348 "zone_management": false, 00:27:02.348 "zone_append": false, 00:27:02.348 "compare": false, 00:27:02.348 "compare_and_write": false, 00:27:02.348 "abort": true, 00:27:02.348 "seek_hole": false, 00:27:02.348 "seek_data": false, 00:27:02.348 "copy": true, 00:27:02.348 "nvme_iov_md": false 00:27:02.348 }, 00:27:02.348 "memory_domains": [ 00:27:02.348 { 00:27:02.348 "dma_device_id": "system", 00:27:02.348 "dma_device_type": 1 00:27:02.348 }, 00:27:02.348 { 00:27:02.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.348 "dma_device_type": 2 00:27:02.348 } 00:27:02.348 ], 00:27:02.348 "driver_specific": {} 00:27:02.348 } 00:27:02.348 ] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:02.348 13:38:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.348 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.348 "name": "Existed_Raid", 00:27:02.348 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:02.348 "strip_size_kb": 64, 00:27:02.348 "state": "configuring", 00:27:02.348 "raid_level": "raid0", 00:27:02.348 "superblock": true, 00:27:02.348 "num_base_bdevs": 4, 00:27:02.348 "num_base_bdevs_discovered": 2, 00:27:02.348 "num_base_bdevs_operational": 4, 00:27:02.348 "base_bdevs_list": [ 00:27:02.348 { 00:27:02.348 "name": "BaseBdev1", 00:27:02.348 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:02.348 "is_configured": true, 00:27:02.348 "data_offset": 2048, 00:27:02.348 "data_size": 63488 00:27:02.348 }, 00:27:02.348 { 
00:27:02.348 "name": "BaseBdev2", 00:27:02.348 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:02.348 "is_configured": true, 00:27:02.348 "data_offset": 2048, 00:27:02.348 "data_size": 63488 00:27:02.348 }, 00:27:02.348 { 00:27:02.348 "name": "BaseBdev3", 00:27:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.348 "is_configured": false, 00:27:02.348 "data_offset": 0, 00:27:02.348 "data_size": 0 00:27:02.348 }, 00:27:02.348 { 00:27:02.348 "name": "BaseBdev4", 00:27:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.348 "is_configured": false, 00:27:02.348 "data_offset": 0, 00:27:02.349 "data_size": 0 00:27:02.349 } 00:27:02.349 ] 00:27:02.349 }' 00:27:02.349 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.349 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.917 [2024-10-28 13:38:16.845911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:02.917 BaseBdev3 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:02.917 13:38:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.917 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.917 [ 00:27:02.917 { 00:27:02.917 "name": "BaseBdev3", 00:27:02.917 "aliases": [ 00:27:02.917 "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b" 00:27:02.917 ], 00:27:02.917 "product_name": "Malloc disk", 00:27:02.917 "block_size": 512, 00:27:02.917 "num_blocks": 65536, 00:27:02.917 "uuid": "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b", 00:27:02.917 "assigned_rate_limits": { 00:27:02.917 "rw_ios_per_sec": 0, 00:27:02.917 "rw_mbytes_per_sec": 0, 00:27:02.917 "r_mbytes_per_sec": 0, 00:27:02.917 "w_mbytes_per_sec": 0 00:27:02.917 }, 00:27:02.917 "claimed": true, 00:27:02.917 "claim_type": "exclusive_write", 00:27:02.917 "zoned": false, 00:27:02.917 "supported_io_types": { 00:27:02.917 "read": true, 00:27:02.918 "write": true, 00:27:02.918 "unmap": true, 00:27:02.918 "flush": true, 00:27:02.918 "reset": true, 00:27:02.918 "nvme_admin": false, 00:27:02.918 "nvme_io": false, 00:27:02.918 "nvme_io_md": false, 00:27:02.918 "write_zeroes": true, 00:27:02.918 "zcopy": true, 
00:27:02.918 "get_zone_info": false, 00:27:02.918 "zone_management": false, 00:27:02.918 "zone_append": false, 00:27:02.918 "compare": false, 00:27:02.918 "compare_and_write": false, 00:27:02.918 "abort": true, 00:27:02.918 "seek_hole": false, 00:27:02.918 "seek_data": false, 00:27:02.918 "copy": true, 00:27:02.918 "nvme_iov_md": false 00:27:02.918 }, 00:27:02.918 "memory_domains": [ 00:27:02.918 { 00:27:02.918 "dma_device_id": "system", 00:27:02.918 "dma_device_type": 1 00:27:02.918 }, 00:27:02.918 { 00:27:02.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.918 "dma_device_type": 2 00:27:02.918 } 00:27:02.918 ], 00:27:02.918 "driver_specific": {} 00:27:02.918 } 00:27:02.918 ] 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:02.918 
13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:02.918 "name": "Existed_Raid", 00:27:02.918 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:02.918 "strip_size_kb": 64, 00:27:02.918 "state": "configuring", 00:27:02.918 "raid_level": "raid0", 00:27:02.918 "superblock": true, 00:27:02.918 "num_base_bdevs": 4, 00:27:02.918 "num_base_bdevs_discovered": 3, 00:27:02.918 "num_base_bdevs_operational": 4, 00:27:02.918 "base_bdevs_list": [ 00:27:02.918 { 00:27:02.918 "name": "BaseBdev1", 00:27:02.918 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:02.918 "is_configured": true, 00:27:02.918 "data_offset": 2048, 00:27:02.918 "data_size": 63488 00:27:02.918 }, 00:27:02.918 { 00:27:02.918 "name": "BaseBdev2", 00:27:02.918 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:02.918 "is_configured": true, 00:27:02.918 "data_offset": 2048, 00:27:02.918 "data_size": 63488 00:27:02.918 }, 00:27:02.918 { 00:27:02.918 "name": "BaseBdev3", 00:27:02.918 "uuid": "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b", 00:27:02.918 
"is_configured": true, 00:27:02.918 "data_offset": 2048, 00:27:02.918 "data_size": 63488 00:27:02.918 }, 00:27:02.918 { 00:27:02.918 "name": "BaseBdev4", 00:27:02.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.918 "is_configured": false, 00:27:02.918 "data_offset": 0, 00:27:02.918 "data_size": 0 00:27:02.918 } 00:27:02.918 ] 00:27:02.918 }' 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:02.918 13:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.486 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:03.486 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.486 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.486 [2024-10-28 13:38:17.440343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:03.486 [2024-10-28 13:38:17.440707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:03.486 [2024-10-28 13:38:17.440742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:03.486 BaseBdev4 00:27:03.486 [2024-10-28 13:38:17.441068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:03.487 [2024-10-28 13:38:17.441331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:03.487 [2024-10-28 13:38:17.441351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:27:03.487 [2024-10-28 13:38:17.441533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.487 13:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.487 [ 00:27:03.487 { 00:27:03.487 "name": "BaseBdev4", 00:27:03.487 "aliases": [ 00:27:03.487 "39d267a6-d566-4784-ba02-91a22dd59e0f" 00:27:03.487 ], 00:27:03.487 "product_name": "Malloc disk", 00:27:03.487 "block_size": 512, 00:27:03.487 "num_blocks": 65536, 00:27:03.487 "uuid": "39d267a6-d566-4784-ba02-91a22dd59e0f", 00:27:03.487 "assigned_rate_limits": { 00:27:03.487 "rw_ios_per_sec": 0, 00:27:03.487 "rw_mbytes_per_sec": 0, 00:27:03.487 "r_mbytes_per_sec": 0, 00:27:03.487 "w_mbytes_per_sec": 0 
00:27:03.487 }, 00:27:03.487 "claimed": true, 00:27:03.487 "claim_type": "exclusive_write", 00:27:03.487 "zoned": false, 00:27:03.487 "supported_io_types": { 00:27:03.487 "read": true, 00:27:03.487 "write": true, 00:27:03.487 "unmap": true, 00:27:03.487 "flush": true, 00:27:03.487 "reset": true, 00:27:03.487 "nvme_admin": false, 00:27:03.487 "nvme_io": false, 00:27:03.487 "nvme_io_md": false, 00:27:03.487 "write_zeroes": true, 00:27:03.487 "zcopy": true, 00:27:03.487 "get_zone_info": false, 00:27:03.487 "zone_management": false, 00:27:03.487 "zone_append": false, 00:27:03.487 "compare": false, 00:27:03.487 "compare_and_write": false, 00:27:03.487 "abort": true, 00:27:03.487 "seek_hole": false, 00:27:03.487 "seek_data": false, 00:27:03.487 "copy": true, 00:27:03.487 "nvme_iov_md": false 00:27:03.487 }, 00:27:03.487 "memory_domains": [ 00:27:03.487 { 00:27:03.487 "dma_device_id": "system", 00:27:03.487 "dma_device_type": 1 00:27:03.487 }, 00:27:03.487 { 00:27:03.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.487 "dma_device_type": 2 00:27:03.487 } 00:27:03.487 ], 00:27:03.487 "driver_specific": {} 00:27:03.487 } 00:27:03.487 ] 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:03.487 13:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:03.487 "name": "Existed_Raid", 00:27:03.487 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:03.487 "strip_size_kb": 64, 00:27:03.487 "state": "online", 00:27:03.487 "raid_level": "raid0", 00:27:03.487 "superblock": true, 00:27:03.487 "num_base_bdevs": 4, 00:27:03.487 "num_base_bdevs_discovered": 4, 00:27:03.487 "num_base_bdevs_operational": 4, 00:27:03.487 "base_bdevs_list": [ 00:27:03.487 { 00:27:03.487 "name": "BaseBdev1", 00:27:03.487 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:03.487 "is_configured": 
true, 00:27:03.487 "data_offset": 2048, 00:27:03.487 "data_size": 63488 00:27:03.487 }, 00:27:03.487 { 00:27:03.487 "name": "BaseBdev2", 00:27:03.487 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:03.487 "is_configured": true, 00:27:03.487 "data_offset": 2048, 00:27:03.487 "data_size": 63488 00:27:03.487 }, 00:27:03.487 { 00:27:03.487 "name": "BaseBdev3", 00:27:03.487 "uuid": "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b", 00:27:03.487 "is_configured": true, 00:27:03.487 "data_offset": 2048, 00:27:03.487 "data_size": 63488 00:27:03.487 }, 00:27:03.487 { 00:27:03.487 "name": "BaseBdev4", 00:27:03.487 "uuid": "39d267a6-d566-4784-ba02-91a22dd59e0f", 00:27:03.487 "is_configured": true, 00:27:03.487 "data_offset": 2048, 00:27:03.487 "data_size": 63488 00:27:03.487 } 00:27:03.487 ] 00:27:03.487 }' 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:03.487 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.055 13:38:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:04.055 13:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.055 [2024-10-28 13:38:18.005046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:04.055 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.055 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:04.055 "name": "Existed_Raid", 00:27:04.055 "aliases": [ 00:27:04.055 "625b83a2-d1e8-4307-b630-e6572bc0e124" 00:27:04.055 ], 00:27:04.055 "product_name": "Raid Volume", 00:27:04.055 "block_size": 512, 00:27:04.055 "num_blocks": 253952, 00:27:04.055 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:04.055 "assigned_rate_limits": { 00:27:04.055 "rw_ios_per_sec": 0, 00:27:04.055 "rw_mbytes_per_sec": 0, 00:27:04.055 "r_mbytes_per_sec": 0, 00:27:04.055 "w_mbytes_per_sec": 0 00:27:04.055 }, 00:27:04.055 "claimed": false, 00:27:04.055 "zoned": false, 00:27:04.055 "supported_io_types": { 00:27:04.055 "read": true, 00:27:04.055 "write": true, 00:27:04.055 "unmap": true, 00:27:04.055 "flush": true, 00:27:04.055 "reset": true, 00:27:04.055 "nvme_admin": false, 00:27:04.055 "nvme_io": false, 00:27:04.055 "nvme_io_md": false, 00:27:04.055 "write_zeroes": true, 00:27:04.055 "zcopy": false, 00:27:04.055 "get_zone_info": false, 00:27:04.055 "zone_management": false, 00:27:04.055 "zone_append": false, 00:27:04.055 "compare": false, 00:27:04.055 "compare_and_write": false, 00:27:04.055 "abort": false, 00:27:04.055 "seek_hole": false, 00:27:04.055 "seek_data": false, 00:27:04.055 "copy": false, 00:27:04.055 "nvme_iov_md": false 00:27:04.055 }, 00:27:04.055 "memory_domains": [ 00:27:04.055 { 00:27:04.055 "dma_device_id": "system", 00:27:04.055 "dma_device_type": 1 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.055 
"dma_device_type": 2 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "system", 00:27:04.055 "dma_device_type": 1 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.055 "dma_device_type": 2 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "system", 00:27:04.055 "dma_device_type": 1 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.055 "dma_device_type": 2 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "system", 00:27:04.055 "dma_device_type": 1 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:04.055 "dma_device_type": 2 00:27:04.055 } 00:27:04.055 ], 00:27:04.055 "driver_specific": { 00:27:04.055 "raid": { 00:27:04.055 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 00:27:04.055 "strip_size_kb": 64, 00:27:04.055 "state": "online", 00:27:04.055 "raid_level": "raid0", 00:27:04.055 "superblock": true, 00:27:04.055 "num_base_bdevs": 4, 00:27:04.055 "num_base_bdevs_discovered": 4, 00:27:04.055 "num_base_bdevs_operational": 4, 00:27:04.055 "base_bdevs_list": [ 00:27:04.055 { 00:27:04.055 "name": "BaseBdev1", 00:27:04.055 "uuid": "b2a07364-c55b-45f5-86d2-b3816545c9cf", 00:27:04.055 "is_configured": true, 00:27:04.055 "data_offset": 2048, 00:27:04.055 "data_size": 63488 00:27:04.055 }, 00:27:04.055 { 00:27:04.055 "name": "BaseBdev2", 00:27:04.055 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:04.055 "is_configured": true, 00:27:04.055 "data_offset": 2048, 00:27:04.055 "data_size": 63488 00:27:04.055 }, 00:27:04.055 { 00:27:04.056 "name": "BaseBdev3", 00:27:04.056 "uuid": "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b", 00:27:04.056 "is_configured": true, 00:27:04.056 "data_offset": 2048, 00:27:04.056 "data_size": 63488 00:27:04.056 }, 00:27:04.056 { 00:27:04.056 "name": "BaseBdev4", 00:27:04.056 "uuid": "39d267a6-d566-4784-ba02-91a22dd59e0f", 00:27:04.056 "is_configured": true, 00:27:04.056 "data_offset": 
2048, 00:27:04.056 "data_size": 63488 00:27:04.056 } 00:27:04.056 ] 00:27:04.056 } 00:27:04.056 } 00:27:04.056 }' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:04.056 BaseBdev2 00:27:04.056 BaseBdev3 00:27:04.056 BaseBdev4' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:04.056 13:38:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.056 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:04.315 13:38:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.315 [2024-10-28 13:38:18.368754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:04.315 [2024-10-28 13:38:18.368940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:04.315 [2024-10-28 13:38:18.369068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.315 "name": "Existed_Raid", 00:27:04.315 "uuid": "625b83a2-d1e8-4307-b630-e6572bc0e124", 
00:27:04.315 "strip_size_kb": 64, 00:27:04.315 "state": "offline", 00:27:04.315 "raid_level": "raid0", 00:27:04.315 "superblock": true, 00:27:04.315 "num_base_bdevs": 4, 00:27:04.315 "num_base_bdevs_discovered": 3, 00:27:04.315 "num_base_bdevs_operational": 3, 00:27:04.315 "base_bdevs_list": [ 00:27:04.315 { 00:27:04.315 "name": null, 00:27:04.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.315 "is_configured": false, 00:27:04.315 "data_offset": 0, 00:27:04.315 "data_size": 63488 00:27:04.315 }, 00:27:04.315 { 00:27:04.315 "name": "BaseBdev2", 00:27:04.315 "uuid": "ed1cec01-3081-4e92-95b2-09c2dc87676d", 00:27:04.315 "is_configured": true, 00:27:04.315 "data_offset": 2048, 00:27:04.315 "data_size": 63488 00:27:04.315 }, 00:27:04.315 { 00:27:04.315 "name": "BaseBdev3", 00:27:04.315 "uuid": "b13aa00b-4c57-4d5a-8647-6ceecc7c8a8b", 00:27:04.315 "is_configured": true, 00:27:04.315 "data_offset": 2048, 00:27:04.315 "data_size": 63488 00:27:04.315 }, 00:27:04.315 { 00:27:04.315 "name": "BaseBdev4", 00:27:04.315 "uuid": "39d267a6-d566-4784-ba02-91a22dd59e0f", 00:27:04.315 "is_configured": true, 00:27:04.315 "data_offset": 2048, 00:27:04.315 "data_size": 63488 00:27:04.315 } 00:27:04.315 ] 00:27:04.315 }' 00:27:04.315 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.316 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.882 [2024-10-28 13:38:18.927035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # 
'[' Existed_Raid '!=' Existed_Raid ']' 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.882 13:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.882 [2024-10-28 13:38:19.005747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:04.882 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 [2024-10-28 
13:38:19.084285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:05.142 [2024-10-28 13:38:19.084438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.142 BaseBdev2 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 [ 00:27:05.142 { 00:27:05.142 "name": "BaseBdev2", 00:27:05.142 "aliases": [ 00:27:05.142 "124774a4-5c04-4c76-adbd-61a3882ba8be" 00:27:05.142 ], 00:27:05.142 "product_name": "Malloc disk", 00:27:05.142 "block_size": 512, 00:27:05.142 "num_blocks": 65536, 00:27:05.142 "uuid": 
"124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:05.142 "assigned_rate_limits": { 00:27:05.142 "rw_ios_per_sec": 0, 00:27:05.142 "rw_mbytes_per_sec": 0, 00:27:05.142 "r_mbytes_per_sec": 0, 00:27:05.142 "w_mbytes_per_sec": 0 00:27:05.142 }, 00:27:05.142 "claimed": false, 00:27:05.142 "zoned": false, 00:27:05.142 "supported_io_types": { 00:27:05.142 "read": true, 00:27:05.142 "write": true, 00:27:05.142 "unmap": true, 00:27:05.142 "flush": true, 00:27:05.142 "reset": true, 00:27:05.142 "nvme_admin": false, 00:27:05.142 "nvme_io": false, 00:27:05.142 "nvme_io_md": false, 00:27:05.142 "write_zeroes": true, 00:27:05.142 "zcopy": true, 00:27:05.142 "get_zone_info": false, 00:27:05.142 "zone_management": false, 00:27:05.142 "zone_append": false, 00:27:05.142 "compare": false, 00:27:05.142 "compare_and_write": false, 00:27:05.142 "abort": true, 00:27:05.142 "seek_hole": false, 00:27:05.142 "seek_data": false, 00:27:05.142 "copy": true, 00:27:05.142 "nvme_iov_md": false 00:27:05.142 }, 00:27:05.142 "memory_domains": [ 00:27:05.142 { 00:27:05.142 "dma_device_id": "system", 00:27:05.142 "dma_device_type": 1 00:27:05.142 }, 00:27:05.142 { 00:27:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.142 "dma_device_type": 2 00:27:05.142 } 00:27:05.142 ], 00:27:05.142 "driver_specific": {} 00:27:05.142 } 00:27:05.142 ] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 BaseBdev3 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.142 [ 00:27:05.142 { 00:27:05.142 "name": "BaseBdev3", 00:27:05.142 "aliases": [ 00:27:05.142 "4c017f04-67e3-49e2-9467-78ece433d2d1" 00:27:05.142 ], 00:27:05.142 "product_name": "Malloc disk", 00:27:05.142 "block_size": 512, 
00:27:05.142 "num_blocks": 65536, 00:27:05.142 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:05.142 "assigned_rate_limits": { 00:27:05.142 "rw_ios_per_sec": 0, 00:27:05.142 "rw_mbytes_per_sec": 0, 00:27:05.142 "r_mbytes_per_sec": 0, 00:27:05.142 "w_mbytes_per_sec": 0 00:27:05.142 }, 00:27:05.142 "claimed": false, 00:27:05.142 "zoned": false, 00:27:05.142 "supported_io_types": { 00:27:05.142 "read": true, 00:27:05.142 "write": true, 00:27:05.142 "unmap": true, 00:27:05.142 "flush": true, 00:27:05.142 "reset": true, 00:27:05.142 "nvme_admin": false, 00:27:05.142 "nvme_io": false, 00:27:05.142 "nvme_io_md": false, 00:27:05.142 "write_zeroes": true, 00:27:05.142 "zcopy": true, 00:27:05.142 "get_zone_info": false, 00:27:05.142 "zone_management": false, 00:27:05.142 "zone_append": false, 00:27:05.142 "compare": false, 00:27:05.142 "compare_and_write": false, 00:27:05.142 "abort": true, 00:27:05.142 "seek_hole": false, 00:27:05.142 "seek_data": false, 00:27:05.142 "copy": true, 00:27:05.142 "nvme_iov_md": false 00:27:05.142 }, 00:27:05.142 "memory_domains": [ 00:27:05.142 { 00:27:05.142 "dma_device_id": "system", 00:27:05.142 "dma_device_type": 1 00:27:05.142 }, 00:27:05.142 { 00:27:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.142 "dma_device_type": 2 00:27:05.142 } 00:27:05.142 ], 00:27:05.142 "driver_specific": {} 00:27:05.142 } 00:27:05.142 ] 00:27:05.142 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:05.143 13:38:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.143 BaseBdev4 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.143 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.143 [ 00:27:05.143 { 00:27:05.143 "name": "BaseBdev4", 00:27:05.143 "aliases": [ 00:27:05.143 "e642d229-cab8-4677-8ff7-8c56073a6e5e" 00:27:05.143 ], 
00:27:05.143 "product_name": "Malloc disk", 00:27:05.143 "block_size": 512, 00:27:05.143 "num_blocks": 65536, 00:27:05.143 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:05.143 "assigned_rate_limits": { 00:27:05.143 "rw_ios_per_sec": 0, 00:27:05.143 "rw_mbytes_per_sec": 0, 00:27:05.143 "r_mbytes_per_sec": 0, 00:27:05.143 "w_mbytes_per_sec": 0 00:27:05.143 }, 00:27:05.143 "claimed": false, 00:27:05.143 "zoned": false, 00:27:05.143 "supported_io_types": { 00:27:05.143 "read": true, 00:27:05.143 "write": true, 00:27:05.143 "unmap": true, 00:27:05.143 "flush": true, 00:27:05.143 "reset": true, 00:27:05.401 "nvme_admin": false, 00:27:05.401 "nvme_io": false, 00:27:05.401 "nvme_io_md": false, 00:27:05.401 "write_zeroes": true, 00:27:05.401 "zcopy": true, 00:27:05.401 "get_zone_info": false, 00:27:05.401 "zone_management": false, 00:27:05.401 "zone_append": false, 00:27:05.401 "compare": false, 00:27:05.401 "compare_and_write": false, 00:27:05.401 "abort": true, 00:27:05.401 "seek_hole": false, 00:27:05.401 "seek_data": false, 00:27:05.401 "copy": true, 00:27:05.401 "nvme_iov_md": false 00:27:05.401 }, 00:27:05.401 "memory_domains": [ 00:27:05.401 { 00:27:05.401 "dma_device_id": "system", 00:27:05.401 "dma_device_type": 1 00:27:05.401 }, 00:27:05.401 { 00:27:05.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.401 "dma_device_type": 2 00:27:05.401 } 00:27:05.401 ], 00:27:05.401 "driver_specific": {} 00:27:05.401 } 00:27:05.401 ] 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.401 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.402 [2024-10-28 13:38:19.310600] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:05.402 [2024-10-28 13:38:19.310688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:05.402 [2024-10-28 13:38:19.310724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:05.402 [2024-10-28 13:38:19.314044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:05.402 [2024-10-28 13:38:19.314123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.402 "name": "Existed_Raid", 00:27:05.402 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:05.402 "strip_size_kb": 64, 00:27:05.402 "state": "configuring", 00:27:05.402 "raid_level": "raid0", 00:27:05.402 "superblock": true, 00:27:05.402 "num_base_bdevs": 4, 00:27:05.402 "num_base_bdevs_discovered": 3, 00:27:05.402 "num_base_bdevs_operational": 4, 00:27:05.402 "base_bdevs_list": [ 00:27:05.402 { 00:27:05.402 "name": "BaseBdev1", 00:27:05.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.402 "is_configured": false, 00:27:05.402 "data_offset": 0, 00:27:05.402 "data_size": 0 00:27:05.402 }, 00:27:05.402 { 00:27:05.402 "name": "BaseBdev2", 00:27:05.402 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:05.402 "is_configured": true, 00:27:05.402 "data_offset": 2048, 00:27:05.402 "data_size": 63488 00:27:05.402 }, 00:27:05.402 { 00:27:05.402 "name": "BaseBdev3", 00:27:05.402 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:05.402 "is_configured": true, 00:27:05.402 "data_offset": 2048, 
00:27:05.402 "data_size": 63488 00:27:05.402 }, 00:27:05.402 { 00:27:05.402 "name": "BaseBdev4", 00:27:05.402 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:05.402 "is_configured": true, 00:27:05.402 "data_offset": 2048, 00:27:05.402 "data_size": 63488 00:27:05.402 } 00:27:05.402 ] 00:27:05.402 }' 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.402 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.979 [2024-10-28 13:38:19.850732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.979 "name": "Existed_Raid", 00:27:05.979 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:05.979 "strip_size_kb": 64, 00:27:05.979 "state": "configuring", 00:27:05.979 "raid_level": "raid0", 00:27:05.979 "superblock": true, 00:27:05.979 "num_base_bdevs": 4, 00:27:05.979 "num_base_bdevs_discovered": 2, 00:27:05.979 "num_base_bdevs_operational": 4, 00:27:05.979 "base_bdevs_list": [ 00:27:05.979 { 00:27:05.979 "name": "BaseBdev1", 00:27:05.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.979 "is_configured": false, 00:27:05.979 "data_offset": 0, 00:27:05.979 "data_size": 0 00:27:05.979 }, 00:27:05.979 { 00:27:05.979 "name": null, 00:27:05.979 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:05.979 "is_configured": false, 00:27:05.979 "data_offset": 0, 00:27:05.979 "data_size": 63488 00:27:05.979 }, 00:27:05.979 { 00:27:05.979 "name": "BaseBdev3", 00:27:05.979 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:05.979 "is_configured": true, 00:27:05.979 "data_offset": 2048, 00:27:05.979 
"data_size": 63488 00:27:05.979 }, 00:27:05.979 { 00:27:05.979 "name": "BaseBdev4", 00:27:05.979 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:05.979 "is_configured": true, 00:27:05.979 "data_offset": 2048, 00:27:05.979 "data_size": 63488 00:27:05.979 } 00:27:05.979 ] 00:27:05.979 }' 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.979 13:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 [2024-10-28 13:38:20.460901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:06.546 BaseBdev1 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 [ 00:27:06.546 { 00:27:06.546 "name": "BaseBdev1", 00:27:06.546 "aliases": [ 00:27:06.546 "6a8b197f-3ea8-4467-9423-0d8131255479" 00:27:06.546 ], 00:27:06.546 "product_name": "Malloc disk", 00:27:06.546 "block_size": 512, 00:27:06.546 "num_blocks": 65536, 00:27:06.546 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:06.546 "assigned_rate_limits": { 00:27:06.546 "rw_ios_per_sec": 0, 00:27:06.546 "rw_mbytes_per_sec": 0, 00:27:06.546 "r_mbytes_per_sec": 0, 00:27:06.546 "w_mbytes_per_sec": 0 00:27:06.546 }, 00:27:06.546 "claimed": true, 00:27:06.546 "claim_type": "exclusive_write", 00:27:06.546 "zoned": false, 00:27:06.546 "supported_io_types": { 
00:27:06.546 "read": true, 00:27:06.546 "write": true, 00:27:06.546 "unmap": true, 00:27:06.546 "flush": true, 00:27:06.546 "reset": true, 00:27:06.546 "nvme_admin": false, 00:27:06.546 "nvme_io": false, 00:27:06.546 "nvme_io_md": false, 00:27:06.546 "write_zeroes": true, 00:27:06.546 "zcopy": true, 00:27:06.546 "get_zone_info": false, 00:27:06.546 "zone_management": false, 00:27:06.546 "zone_append": false, 00:27:06.546 "compare": false, 00:27:06.546 "compare_and_write": false, 00:27:06.546 "abort": true, 00:27:06.546 "seek_hole": false, 00:27:06.546 "seek_data": false, 00:27:06.546 "copy": true, 00:27:06.546 "nvme_iov_md": false 00:27:06.546 }, 00:27:06.546 "memory_domains": [ 00:27:06.546 { 00:27:06.546 "dma_device_id": "system", 00:27:06.546 "dma_device_type": 1 00:27:06.546 }, 00:27:06.546 { 00:27:06.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.546 "dma_device_type": 2 00:27:06.546 } 00:27:06.546 ], 00:27:06.546 "driver_specific": {} 00:27:06.546 } 00:27:06.546 ] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:06.546 13:38:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.546 "name": "Existed_Raid", 00:27:06.546 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:06.546 "strip_size_kb": 64, 00:27:06.546 "state": "configuring", 00:27:06.546 "raid_level": "raid0", 00:27:06.546 "superblock": true, 00:27:06.546 "num_base_bdevs": 4, 00:27:06.546 "num_base_bdevs_discovered": 3, 00:27:06.546 "num_base_bdevs_operational": 4, 00:27:06.546 "base_bdevs_list": [ 00:27:06.546 { 00:27:06.546 "name": "BaseBdev1", 00:27:06.546 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:06.546 "is_configured": true, 00:27:06.546 "data_offset": 2048, 00:27:06.546 "data_size": 63488 00:27:06.546 }, 00:27:06.546 { 00:27:06.546 "name": null, 00:27:06.546 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:06.546 "is_configured": false, 00:27:06.546 "data_offset": 0, 00:27:06.546 "data_size": 63488 00:27:06.546 }, 00:27:06.546 { 00:27:06.546 "name": 
"BaseBdev3", 00:27:06.546 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:06.546 "is_configured": true, 00:27:06.546 "data_offset": 2048, 00:27:06.546 "data_size": 63488 00:27:06.546 }, 00:27:06.546 { 00:27:06.546 "name": "BaseBdev4", 00:27:06.546 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:06.546 "is_configured": true, 00:27:06.546 "data_offset": 2048, 00:27:06.546 "data_size": 63488 00:27:06.546 } 00:27:06.546 ] 00:27:06.546 }' 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.546 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.111 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.111 13:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:07.111 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.111 13:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.111 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.111 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.112 [2024-10-28 13:38:21.053211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.112 "name": "Existed_Raid", 00:27:07.112 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:07.112 "strip_size_kb": 64, 00:27:07.112 "state": "configuring", 
00:27:07.112 "raid_level": "raid0", 00:27:07.112 "superblock": true, 00:27:07.112 "num_base_bdevs": 4, 00:27:07.112 "num_base_bdevs_discovered": 2, 00:27:07.112 "num_base_bdevs_operational": 4, 00:27:07.112 "base_bdevs_list": [ 00:27:07.112 { 00:27:07.112 "name": "BaseBdev1", 00:27:07.112 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:07.112 "is_configured": true, 00:27:07.112 "data_offset": 2048, 00:27:07.112 "data_size": 63488 00:27:07.112 }, 00:27:07.112 { 00:27:07.112 "name": null, 00:27:07.112 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:07.112 "is_configured": false, 00:27:07.112 "data_offset": 0, 00:27:07.112 "data_size": 63488 00:27:07.112 }, 00:27:07.112 { 00:27:07.112 "name": null, 00:27:07.112 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:07.112 "is_configured": false, 00:27:07.112 "data_offset": 0, 00:27:07.112 "data_size": 63488 00:27:07.112 }, 00:27:07.112 { 00:27:07.112 "name": "BaseBdev4", 00:27:07.112 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:07.112 "is_configured": true, 00:27:07.112 "data_offset": 2048, 00:27:07.112 "data_size": 63488 00:27:07.112 } 00:27:07.112 ] 00:27:07.112 }' 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.112 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.678 13:38:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 [2024-10-28 13:38:21.653459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.678 "name": "Existed_Raid", 00:27:07.678 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:07.678 "strip_size_kb": 64, 00:27:07.678 "state": "configuring", 00:27:07.678 "raid_level": "raid0", 00:27:07.678 "superblock": true, 00:27:07.678 "num_base_bdevs": 4, 00:27:07.678 "num_base_bdevs_discovered": 3, 00:27:07.678 "num_base_bdevs_operational": 4, 00:27:07.678 "base_bdevs_list": [ 00:27:07.678 { 00:27:07.678 "name": "BaseBdev1", 00:27:07.678 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:07.678 "is_configured": true, 00:27:07.678 "data_offset": 2048, 00:27:07.678 "data_size": 63488 00:27:07.678 }, 00:27:07.678 { 00:27:07.678 "name": null, 00:27:07.678 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:07.678 "is_configured": false, 00:27:07.678 "data_offset": 0, 00:27:07.678 "data_size": 63488 00:27:07.678 }, 00:27:07.678 { 00:27:07.678 "name": "BaseBdev3", 00:27:07.678 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:07.678 "is_configured": true, 00:27:07.678 "data_offset": 2048, 00:27:07.678 "data_size": 63488 00:27:07.678 }, 00:27:07.678 { 00:27:07.678 "name": "BaseBdev4", 00:27:07.678 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:07.678 "is_configured": true, 00:27:07.678 "data_offset": 2048, 00:27:07.678 "data_size": 63488 00:27:07.678 } 00:27:07.678 ] 00:27:07.678 }' 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:27:07.678 13:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 [2024-10-28 13:38:22.241673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.262 "name": "Existed_Raid", 00:27:08.262 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:08.262 "strip_size_kb": 64, 00:27:08.262 "state": "configuring", 00:27:08.262 "raid_level": "raid0", 00:27:08.262 "superblock": true, 00:27:08.262 "num_base_bdevs": 4, 00:27:08.262 "num_base_bdevs_discovered": 2, 00:27:08.262 "num_base_bdevs_operational": 4, 00:27:08.262 "base_bdevs_list": [ 00:27:08.262 { 00:27:08.262 "name": null, 00:27:08.262 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:08.262 "is_configured": false, 00:27:08.262 "data_offset": 0, 00:27:08.262 "data_size": 63488 00:27:08.262 }, 00:27:08.262 { 00:27:08.262 "name": null, 00:27:08.262 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:08.262 
"is_configured": false, 00:27:08.262 "data_offset": 0, 00:27:08.262 "data_size": 63488 00:27:08.262 }, 00:27:08.262 { 00:27:08.262 "name": "BaseBdev3", 00:27:08.262 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:08.262 "is_configured": true, 00:27:08.262 "data_offset": 2048, 00:27:08.262 "data_size": 63488 00:27:08.262 }, 00:27:08.262 { 00:27:08.262 "name": "BaseBdev4", 00:27:08.262 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:08.262 "is_configured": true, 00:27:08.262 "data_offset": 2048, 00:27:08.262 "data_size": 63488 00:27:08.262 } 00:27:08.262 ] 00:27:08.262 }' 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:08.262 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.849 [2024-10-28 13:38:22.808918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:08.849 
13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:08.849 
"name": "Existed_Raid", 00:27:08.849 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:08.849 "strip_size_kb": 64, 00:27:08.849 "state": "configuring", 00:27:08.849 "raid_level": "raid0", 00:27:08.849 "superblock": true, 00:27:08.849 "num_base_bdevs": 4, 00:27:08.849 "num_base_bdevs_discovered": 3, 00:27:08.849 "num_base_bdevs_operational": 4, 00:27:08.849 "base_bdevs_list": [ 00:27:08.849 { 00:27:08.849 "name": null, 00:27:08.849 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:08.849 "is_configured": false, 00:27:08.849 "data_offset": 0, 00:27:08.849 "data_size": 63488 00:27:08.849 }, 00:27:08.849 { 00:27:08.849 "name": "BaseBdev2", 00:27:08.849 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:08.849 "is_configured": true, 00:27:08.849 "data_offset": 2048, 00:27:08.849 "data_size": 63488 00:27:08.849 }, 00:27:08.849 { 00:27:08.849 "name": "BaseBdev3", 00:27:08.849 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:08.849 "is_configured": true, 00:27:08.849 "data_offset": 2048, 00:27:08.849 "data_size": 63488 00:27:08.849 }, 00:27:08.849 { 00:27:08.849 "name": "BaseBdev4", 00:27:08.849 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:08.849 "is_configured": true, 00:27:08.849 "data_offset": 2048, 00:27:08.849 "data_size": 63488 00:27:08.849 } 00:27:08.849 ] 00:27:08.849 }' 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:08.849 13:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6a8b197f-3ea8-4467-9423-0d8131255479 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 [2024-10-28 13:38:23.390818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:09.417 NewBaseBdev 00:27:09.417 [2024-10-28 13:38:23.391274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:09.417 [2024-10-28 13:38:23.391309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:09.417 [2024-10-28 13:38:23.391642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:27:09.417 [2024-10-28 13:38:23.391796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:09.417 [2024-10-28 13:38:23.391812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:27:09.417 [2024-10-28 13:38:23.391984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 [ 00:27:09.417 { 00:27:09.417 "name": "NewBaseBdev", 00:27:09.417 "aliases": [ 00:27:09.417 "6a8b197f-3ea8-4467-9423-0d8131255479" 00:27:09.417 ], 00:27:09.417 "product_name": "Malloc disk", 00:27:09.417 "block_size": 512, 
00:27:09.417 "num_blocks": 65536, 00:27:09.417 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:09.417 "assigned_rate_limits": { 00:27:09.417 "rw_ios_per_sec": 0, 00:27:09.417 "rw_mbytes_per_sec": 0, 00:27:09.417 "r_mbytes_per_sec": 0, 00:27:09.417 "w_mbytes_per_sec": 0 00:27:09.417 }, 00:27:09.417 "claimed": true, 00:27:09.417 "claim_type": "exclusive_write", 00:27:09.417 "zoned": false, 00:27:09.417 "supported_io_types": { 00:27:09.417 "read": true, 00:27:09.417 "write": true, 00:27:09.417 "unmap": true, 00:27:09.417 "flush": true, 00:27:09.417 "reset": true, 00:27:09.417 "nvme_admin": false, 00:27:09.417 "nvme_io": false, 00:27:09.417 "nvme_io_md": false, 00:27:09.417 "write_zeroes": true, 00:27:09.417 "zcopy": true, 00:27:09.417 "get_zone_info": false, 00:27:09.417 "zone_management": false, 00:27:09.417 "zone_append": false, 00:27:09.417 "compare": false, 00:27:09.417 "compare_and_write": false, 00:27:09.417 "abort": true, 00:27:09.417 "seek_hole": false, 00:27:09.417 "seek_data": false, 00:27:09.417 "copy": true, 00:27:09.417 "nvme_iov_md": false 00:27:09.417 }, 00:27:09.417 "memory_domains": [ 00:27:09.417 { 00:27:09.417 "dma_device_id": "system", 00:27:09.417 "dma_device_type": 1 00:27:09.417 }, 00:27:09.417 { 00:27:09.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.417 "dma_device_type": 2 00:27:09.417 } 00:27:09.417 ], 00:27:09.417 "driver_specific": {} 00:27:09.417 } 00:27:09.417 ] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.417 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:09.418 "name": "Existed_Raid", 00:27:09.418 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:09.418 "strip_size_kb": 64, 00:27:09.418 "state": "online", 00:27:09.418 "raid_level": "raid0", 00:27:09.418 "superblock": true, 00:27:09.418 "num_base_bdevs": 4, 00:27:09.418 "num_base_bdevs_discovered": 4, 00:27:09.418 "num_base_bdevs_operational": 4, 00:27:09.418 "base_bdevs_list": [ 00:27:09.418 { 00:27:09.418 "name": "NewBaseBdev", 00:27:09.418 "uuid": 
"6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:09.418 "is_configured": true, 00:27:09.418 "data_offset": 2048, 00:27:09.418 "data_size": 63488 00:27:09.418 }, 00:27:09.418 { 00:27:09.418 "name": "BaseBdev2", 00:27:09.418 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:09.418 "is_configured": true, 00:27:09.418 "data_offset": 2048, 00:27:09.418 "data_size": 63488 00:27:09.418 }, 00:27:09.418 { 00:27:09.418 "name": "BaseBdev3", 00:27:09.418 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:09.418 "is_configured": true, 00:27:09.418 "data_offset": 2048, 00:27:09.418 "data_size": 63488 00:27:09.418 }, 00:27:09.418 { 00:27:09.418 "name": "BaseBdev4", 00:27:09.418 "uuid": "e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:09.418 "is_configured": true, 00:27:09.418 "data_offset": 2048, 00:27:09.418 "data_size": 63488 00:27:09.418 } 00:27:09.418 ] 00:27:09.418 }' 00:27:09.418 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:09.418 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:09.985 [2024-10-28 13:38:23.932582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:09.985 "name": "Existed_Raid", 00:27:09.985 "aliases": [ 00:27:09.985 "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9" 00:27:09.985 ], 00:27:09.985 "product_name": "Raid Volume", 00:27:09.985 "block_size": 512, 00:27:09.985 "num_blocks": 253952, 00:27:09.985 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:09.985 "assigned_rate_limits": { 00:27:09.985 "rw_ios_per_sec": 0, 00:27:09.985 "rw_mbytes_per_sec": 0, 00:27:09.985 "r_mbytes_per_sec": 0, 00:27:09.985 "w_mbytes_per_sec": 0 00:27:09.985 }, 00:27:09.985 "claimed": false, 00:27:09.985 "zoned": false, 00:27:09.985 "supported_io_types": { 00:27:09.985 "read": true, 00:27:09.985 "write": true, 00:27:09.985 "unmap": true, 00:27:09.985 "flush": true, 00:27:09.985 "reset": true, 00:27:09.985 "nvme_admin": false, 00:27:09.985 "nvme_io": false, 00:27:09.985 "nvme_io_md": false, 00:27:09.985 "write_zeroes": true, 00:27:09.985 "zcopy": false, 00:27:09.985 "get_zone_info": false, 00:27:09.985 "zone_management": false, 00:27:09.985 "zone_append": false, 00:27:09.985 "compare": false, 00:27:09.985 "compare_and_write": false, 00:27:09.985 "abort": false, 00:27:09.985 "seek_hole": false, 00:27:09.985 "seek_data": false, 00:27:09.985 "copy": false, 00:27:09.985 "nvme_iov_md": false 00:27:09.985 }, 00:27:09.985 "memory_domains": [ 00:27:09.985 { 00:27:09.985 "dma_device_id": "system", 00:27:09.985 "dma_device_type": 1 00:27:09.985 }, 00:27:09.985 
{ 00:27:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.985 "dma_device_type": 2 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "system", 00:27:09.985 "dma_device_type": 1 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.985 "dma_device_type": 2 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "system", 00:27:09.985 "dma_device_type": 1 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.985 "dma_device_type": 2 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "system", 00:27:09.985 "dma_device_type": 1 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.985 "dma_device_type": 2 00:27:09.985 } 00:27:09.985 ], 00:27:09.985 "driver_specific": { 00:27:09.985 "raid": { 00:27:09.985 "uuid": "5c2ac3eb-e3e0-4044-bbbc-b1fb655356e9", 00:27:09.985 "strip_size_kb": 64, 00:27:09.985 "state": "online", 00:27:09.985 "raid_level": "raid0", 00:27:09.985 "superblock": true, 00:27:09.985 "num_base_bdevs": 4, 00:27:09.985 "num_base_bdevs_discovered": 4, 00:27:09.985 "num_base_bdevs_operational": 4, 00:27:09.985 "base_bdevs_list": [ 00:27:09.985 { 00:27:09.985 "name": "NewBaseBdev", 00:27:09.985 "uuid": "6a8b197f-3ea8-4467-9423-0d8131255479", 00:27:09.985 "is_configured": true, 00:27:09.985 "data_offset": 2048, 00:27:09.985 "data_size": 63488 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "name": "BaseBdev2", 00:27:09.985 "uuid": "124774a4-5c04-4c76-adbd-61a3882ba8be", 00:27:09.985 "is_configured": true, 00:27:09.985 "data_offset": 2048, 00:27:09.985 "data_size": 63488 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "name": "BaseBdev3", 00:27:09.985 "uuid": "4c017f04-67e3-49e2-9467-78ece433d2d1", 00:27:09.985 "is_configured": true, 00:27:09.985 "data_offset": 2048, 00:27:09.985 "data_size": 63488 00:27:09.985 }, 00:27:09.985 { 00:27:09.985 "name": "BaseBdev4", 00:27:09.985 "uuid": 
"e642d229-cab8-4677-8ff7-8c56073a6e5e", 00:27:09.985 "is_configured": true, 00:27:09.985 "data_offset": 2048, 00:27:09.985 "data_size": 63488 00:27:09.985 } 00:27:09.985 ] 00:27:09.985 } 00:27:09.985 } 00:27:09.985 }' 00:27:09.985 13:38:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:09.985 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:09.985 BaseBdev2 00:27:09.985 BaseBdev3 00:27:09.985 BaseBdev4' 00:27:09.985 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:09.985 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:09.985 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:09.986 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:09.986 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:09.986 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.986 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.986 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:10.244 13:38:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:10.244 [2024-10-28 13:38:24.312207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:10.244 [2024-10-28 13:38:24.312252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.244 [2024-10-28 13:38:24.312346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.244 [2024-10-28 13:38:24.312428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.244 [2024-10-28 13:38:24.312454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82829 
00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82829 ']' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82829 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82829 00:27:10.244 killing process with pid 82829 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82829' 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82829 00:27:10.244 [2024-10-28 13:38:24.358085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:10.244 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82829 00:27:10.502 [2024-10-28 13:38:24.405101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:10.760 13:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:10.760 00:27:10.760 real 0m11.210s 00:27:10.760 user 0m19.629s 00:27:10.760 sys 0m1.867s 00:27:10.760 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:10.760 ************************************ 00:27:10.760 END TEST raid_state_function_test_sb 00:27:10.760 ************************************ 00:27:10.760 13:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:27:10.760 13:38:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:27:10.760 13:38:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:10.760 13:38:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:10.760 13:38:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:10.760 ************************************ 00:27:10.760 START TEST raid_superblock_test 00:27:10.760 ************************************ 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local 
raid_bdev 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83494 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:10.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83494 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83494 ']' 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:10.760 13:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.760 [2024-10-28 13:38:24.854269] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:27:10.760 [2024-10-28 13:38:24.854454] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83494 ] 00:27:11.020 [2024-10-28 13:38:25.010668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:11.020 [2024-10-28 13:38:25.038585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.020 [2024-10-28 13:38:25.093833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.020 [2024-10-28 13:38:25.157049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:11.020 [2024-10-28 13:38:25.157094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.956 malloc1 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.956 [2024-10-28 13:38:25.915126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:11.956 [2024-10-28 13:38:25.915614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.956 [2024-10-28 13:38:25.915726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:11.956 [2024-10-28 13:38:25.916034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.956 [2024-10-28 13:38:25.920119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.956 [2024-10-28 13:38:25.920353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:11.956 pt1 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.956 malloc2 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.956 [2024-10-28 13:38:25.954692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:11.956 [2024-10-28 13:38:25.954794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.956 [2024-10-28 13:38:25.954834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:11.956 [2024-10-28 13:38:25.954854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.956 [2024-10-28 13:38:25.958677] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.956 [2024-10-28 13:38:25.958899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:11.956 pt2 00:27:11.956 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 malloc3 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 [2024-10-28 13:38:25.993414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:11.957 [2024-10-28 13:38:25.993889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.957 [2024-10-28 13:38:25.993946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:11.957 [2024-10-28 13:38:25.993968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.957 [2024-10-28 13:38:25.998005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.957 [2024-10-28 13:38:25.998064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:11.957 pt3 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 malloc4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 [2024-10-28 13:38:26.036383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:11.957 [2024-10-28 13:38:26.036490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.957 [2024-10-28 13:38:26.036532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:11.957 [2024-10-28 13:38:26.036554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.957 [2024-10-28 13:38:26.040356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.957 [2024-10-28 13:38:26.040439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:11.957 pt4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 [2024-10-28 13:38:26.048773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:11.957 [2024-10-28 13:38:26.052222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:11.957 [2024-10-28 13:38:26.052351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:11.957 [2024-10-28 13:38:26.052464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:11.957 [2024-10-28 13:38:26.052804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:27:11.957 [2024-10-28 13:38:26.052826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:11.957 [2024-10-28 13:38:26.053289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:11.957 [2024-10-28 13:38:26.053578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:27:11.957 [2024-10-28 13:38:26.053630] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:27:11.957 [2024-10-28 13:38:26.053833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.957 "name": "raid_bdev1", 00:27:11.957 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:11.957 "strip_size_kb": 64, 00:27:11.957 "state": "online", 00:27:11.957 "raid_level": "raid0", 00:27:11.957 "superblock": true, 00:27:11.957 "num_base_bdevs": 4, 00:27:11.957 "num_base_bdevs_discovered": 4, 00:27:11.957 "num_base_bdevs_operational": 4, 00:27:11.957 "base_bdevs_list": [ 00:27:11.957 { 00:27:11.957 "name": "pt1", 00:27:11.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:11.957 "is_configured": true, 00:27:11.957 "data_offset": 2048, 00:27:11.957 "data_size": 63488 00:27:11.957 }, 00:27:11.957 { 00:27:11.957 "name": "pt2", 00:27:11.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.957 "is_configured": true, 00:27:11.957 "data_offset": 2048, 00:27:11.957 
"data_size": 63488 00:27:11.957 }, 00:27:11.957 { 00:27:11.957 "name": "pt3", 00:27:11.957 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:11.957 "is_configured": true, 00:27:11.957 "data_offset": 2048, 00:27:11.957 "data_size": 63488 00:27:11.957 }, 00:27:11.957 { 00:27:11.957 "name": "pt4", 00:27:11.957 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:11.957 "is_configured": true, 00:27:11.957 "data_offset": 2048, 00:27:11.957 "data_size": 63488 00:27:11.957 } 00:27:11.957 ] 00:27:11.957 }' 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.957 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.523 [2024-10-28 13:38:26.601500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:12.523 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:12.523 "name": "raid_bdev1", 00:27:12.523 "aliases": [ 00:27:12.523 "aea7b47b-33aa-425e-9035-bc966d254e1f" 00:27:12.523 ], 00:27:12.523 "product_name": "Raid Volume", 00:27:12.523 "block_size": 512, 00:27:12.523 "num_blocks": 253952, 00:27:12.523 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:12.523 "assigned_rate_limits": { 00:27:12.523 "rw_ios_per_sec": 0, 00:27:12.523 "rw_mbytes_per_sec": 0, 00:27:12.523 "r_mbytes_per_sec": 0, 00:27:12.523 "w_mbytes_per_sec": 0 00:27:12.523 }, 00:27:12.523 "claimed": false, 00:27:12.523 "zoned": false, 00:27:12.523 "supported_io_types": { 00:27:12.523 "read": true, 00:27:12.524 "write": true, 00:27:12.524 "unmap": true, 00:27:12.524 "flush": true, 00:27:12.524 "reset": true, 00:27:12.524 "nvme_admin": false, 00:27:12.524 "nvme_io": false, 00:27:12.524 "nvme_io_md": false, 00:27:12.524 "write_zeroes": true, 00:27:12.524 "zcopy": false, 00:27:12.524 "get_zone_info": false, 00:27:12.524 "zone_management": false, 00:27:12.524 "zone_append": false, 00:27:12.524 "compare": false, 00:27:12.524 "compare_and_write": false, 00:27:12.524 "abort": false, 00:27:12.524 "seek_hole": false, 00:27:12.524 "seek_data": false, 00:27:12.524 "copy": false, 00:27:12.524 "nvme_iov_md": false 00:27:12.524 }, 00:27:12.524 "memory_domains": [ 00:27:12.524 { 00:27:12.524 "dma_device_id": "system", 00:27:12.524 "dma_device_type": 1 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.524 "dma_device_type": 2 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "system", 00:27:12.524 "dma_device_type": 1 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.524 "dma_device_type": 2 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "system", 00:27:12.524 "dma_device_type": 1 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:12.524 "dma_device_type": 2 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "system", 00:27:12.524 "dma_device_type": 1 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:12.524 "dma_device_type": 2 00:27:12.524 } 00:27:12.524 ], 00:27:12.524 "driver_specific": { 00:27:12.524 "raid": { 00:27:12.524 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:12.524 "strip_size_kb": 64, 00:27:12.524 "state": "online", 00:27:12.524 "raid_level": "raid0", 00:27:12.524 "superblock": true, 00:27:12.524 "num_base_bdevs": 4, 00:27:12.524 "num_base_bdevs_discovered": 4, 00:27:12.524 "num_base_bdevs_operational": 4, 00:27:12.524 "base_bdevs_list": [ 00:27:12.524 { 00:27:12.524 "name": "pt1", 00:27:12.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:12.524 "is_configured": true, 00:27:12.524 "data_offset": 2048, 00:27:12.524 "data_size": 63488 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "name": "pt2", 00:27:12.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.524 "is_configured": true, 00:27:12.524 "data_offset": 2048, 00:27:12.524 "data_size": 63488 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "name": "pt3", 00:27:12.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:12.524 "is_configured": true, 00:27:12.524 "data_offset": 2048, 00:27:12.524 "data_size": 63488 00:27:12.524 }, 00:27:12.524 { 00:27:12.524 "name": "pt4", 00:27:12.524 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:12.524 "is_configured": true, 00:27:12.524 "data_offset": 2048, 00:27:12.524 "data_size": 63488 00:27:12.524 } 00:27:12.524 ] 00:27:12.524 } 00:27:12.524 } 00:27:12.524 }' 00:27:12.524 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:12.782 pt2 00:27:12.782 pt3 00:27:12.782 pt4' 
00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:12.782 13:38:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.782 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 [2024-10-28 13:38:26.974479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.041 13:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aea7b47b-33aa-425e-9035-bc966d254e1f 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z aea7b47b-33aa-425e-9035-bc966d254e1f ']' 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 [2024-10-28 13:38:27.022104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.041 [2024-10-28 13:38:27.022162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:13.041 [2024-10-28 13:38:27.022289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.041 [2024-10-28 13:38:27.022385] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:13.041 [2024-10-28 13:38:27.022419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.041 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.041 [2024-10-28 13:38:27.190292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:13.041 [2024-10-28 13:38:27.193324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:13.041 [2024-10-28 13:38:27.193393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:13.041 [2024-10-28 13:38:27.193446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:13.041 [2024-10-28 13:38:27.193523] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:13.041 [2024-10-28 13:38:27.193655] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:13.041 [2024-10-28 13:38:27.193684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:13.041 [2024-10-28 13:38:27.193712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:13.041 [2024-10-28 
13:38:27.193732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.041 [2024-10-28 13:38:27.193748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:27:13.041 request: 00:27:13.041 { 00:27:13.041 "name": "raid_bdev1", 00:27:13.041 "raid_level": "raid0", 00:27:13.041 "base_bdevs": [ 00:27:13.041 "malloc1", 00:27:13.041 "malloc2", 00:27:13.041 "malloc3", 00:27:13.041 "malloc4" 00:27:13.041 ], 00:27:13.041 "strip_size_kb": 64, 00:27:13.041 "superblock": false, 00:27:13.041 "method": "bdev_raid_create", 00:27:13.041 "req_id": 1 00:27:13.041 } 00:27:13.041 Got JSON-RPC error response 00:27:13.041 response: 00:27:13.041 { 00:27:13.041 "code": -17, 00:27:13.041 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:13.041 } 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:13.300 13:38:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.300 [2024-10-28 13:38:27.258307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:13.300 [2024-10-28 13:38:27.258386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.300 [2024-10-28 13:38:27.258415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:13.300 [2024-10-28 13:38:27.258435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.300 [2024-10-28 13:38:27.261813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.300 [2024-10-28 13:38:27.261855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:13.300 [2024-10-28 13:38:27.261966] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:13.300 [2024-10-28 13:38:27.262044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:13.300 pt1 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.300 "name": "raid_bdev1", 00:27:13.300 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:13.300 "strip_size_kb": 64, 00:27:13.300 "state": "configuring", 00:27:13.300 "raid_level": "raid0", 00:27:13.300 "superblock": true, 00:27:13.300 "num_base_bdevs": 4, 00:27:13.300 "num_base_bdevs_discovered": 1, 00:27:13.300 "num_base_bdevs_operational": 4, 00:27:13.300 "base_bdevs_list": [ 00:27:13.300 { 00:27:13.300 "name": "pt1", 00:27:13.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.300 "is_configured": true, 00:27:13.300 "data_offset": 2048, 00:27:13.300 "data_size": 63488 00:27:13.300 }, 00:27:13.300 { 00:27:13.300 "name": null, 00:27:13.300 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:13.300 "is_configured": false, 00:27:13.300 "data_offset": 2048, 00:27:13.300 "data_size": 63488 00:27:13.300 }, 00:27:13.300 { 00:27:13.300 "name": null, 00:27:13.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:13.300 "is_configured": false, 00:27:13.300 "data_offset": 2048, 00:27:13.300 "data_size": 63488 00:27:13.300 }, 00:27:13.300 { 00:27:13.300 "name": null, 00:27:13.300 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:13.300 "is_configured": false, 00:27:13.300 "data_offset": 2048, 00:27:13.300 "data_size": 63488 00:27:13.300 } 00:27:13.300 ] 00:27:13.300 }' 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.300 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.866 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:27:13.866 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:13.866 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.866 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.867 [2024-10-28 13:38:27.814538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:13.867 [2024-10-28 13:38:27.814667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.867 [2024-10-28 13:38:27.814703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:13.867 [2024-10-28 13:38:27.814722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.867 [2024-10-28 13:38:27.815381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.867 [2024-10-28 13:38:27.815422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:27:13.867 [2024-10-28 13:38:27.815545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:13.867 [2024-10-28 13:38:27.815586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:13.867 pt2 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.867 [2024-10-28 13:38:27.822452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.867 "name": "raid_bdev1", 00:27:13.867 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:13.867 "strip_size_kb": 64, 00:27:13.867 "state": "configuring", 00:27:13.867 "raid_level": "raid0", 00:27:13.867 "superblock": true, 00:27:13.867 "num_base_bdevs": 4, 00:27:13.867 "num_base_bdevs_discovered": 1, 00:27:13.867 "num_base_bdevs_operational": 4, 00:27:13.867 "base_bdevs_list": [ 00:27:13.867 { 00:27:13.867 "name": "pt1", 00:27:13.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.867 "is_configured": true, 00:27:13.867 "data_offset": 2048, 00:27:13.867 "data_size": 63488 00:27:13.867 }, 00:27:13.867 { 00:27:13.867 "name": null, 00:27:13.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.867 "is_configured": false, 00:27:13.867 "data_offset": 0, 00:27:13.867 "data_size": 63488 00:27:13.867 }, 00:27:13.867 { 00:27:13.867 "name": null, 00:27:13.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:13.867 "is_configured": false, 00:27:13.867 "data_offset": 2048, 00:27:13.867 "data_size": 63488 00:27:13.867 }, 00:27:13.867 { 00:27:13.867 "name": null, 00:27:13.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:13.867 "is_configured": false, 00:27:13.867 "data_offset": 2048, 00:27:13.867 "data_size": 63488 00:27:13.867 } 00:27:13.867 ] 00:27:13.867 }' 
00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.867 13:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.432 [2024-10-28 13:38:28.374699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:14.432 [2024-10-28 13:38:28.374812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.432 [2024-10-28 13:38:28.374849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:14.432 [2024-10-28 13:38:28.374866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.432 [2024-10-28 13:38:28.375571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.432 [2024-10-28 13:38:28.375603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:14.432 [2024-10-28 13:38:28.375722] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:14.432 [2024-10-28 13:38:28.375758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:14.432 pt2 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:14.432 13:38:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.432 [2024-10-28 13:38:28.386647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:14.432 [2024-10-28 13:38:28.386721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.432 [2024-10-28 13:38:28.386755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:14.432 [2024-10-28 13:38:28.386771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.432 [2024-10-28 13:38:28.387355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.432 [2024-10-28 13:38:28.387388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:14.432 [2024-10-28 13:38:28.387485] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:14.432 [2024-10-28 13:38:28.387561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:14.432 pt3 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.432 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.432 [2024-10-28 13:38:28.398663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:14.432 [2024-10-28 13:38:28.398769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.432 [2024-10-28 13:38:28.398803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:14.433 [2024-10-28 13:38:28.398818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.433 [2024-10-28 13:38:28.399399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.433 [2024-10-28 13:38:28.399431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:14.433 [2024-10-28 13:38:28.399543] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:14.433 [2024-10-28 13:38:28.399577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:14.433 [2024-10-28 13:38:28.399741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:14.433 [2024-10-28 13:38:28.399757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:14.433 [2024-10-28 13:38:28.400064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:14.433 [2024-10-28 13:38:28.400307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:14.433 [2024-10-28 13:38:28.400337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:14.433 [2024-10-28 13:38:28.400478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.433 pt4 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:14.433 "name": 
"raid_bdev1", 00:27:14.433 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:14.433 "strip_size_kb": 64, 00:27:14.433 "state": "online", 00:27:14.433 "raid_level": "raid0", 00:27:14.433 "superblock": true, 00:27:14.433 "num_base_bdevs": 4, 00:27:14.433 "num_base_bdevs_discovered": 4, 00:27:14.433 "num_base_bdevs_operational": 4, 00:27:14.433 "base_bdevs_list": [ 00:27:14.433 { 00:27:14.433 "name": "pt1", 00:27:14.433 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:14.433 "is_configured": true, 00:27:14.433 "data_offset": 2048, 00:27:14.433 "data_size": 63488 00:27:14.433 }, 00:27:14.433 { 00:27:14.433 "name": "pt2", 00:27:14.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.433 "is_configured": true, 00:27:14.433 "data_offset": 2048, 00:27:14.433 "data_size": 63488 00:27:14.433 }, 00:27:14.433 { 00:27:14.433 "name": "pt3", 00:27:14.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:14.433 "is_configured": true, 00:27:14.433 "data_offset": 2048, 00:27:14.433 "data_size": 63488 00:27:14.433 }, 00:27:14.433 { 00:27:14.433 "name": "pt4", 00:27:14.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:14.433 "is_configured": true, 00:27:14.433 "data_offset": 2048, 00:27:14.433 "data_size": 63488 00:27:14.433 } 00:27:14.433 ] 00:27:14.433 }' 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:14.433 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.998 [2024-10-28 13:38:28.971424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.998 13:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.998 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:14.998 "name": "raid_bdev1", 00:27:14.998 "aliases": [ 00:27:14.998 "aea7b47b-33aa-425e-9035-bc966d254e1f" 00:27:14.998 ], 00:27:14.998 "product_name": "Raid Volume", 00:27:14.998 "block_size": 512, 00:27:14.998 "num_blocks": 253952, 00:27:14.998 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:14.998 "assigned_rate_limits": { 00:27:14.998 "rw_ios_per_sec": 0, 00:27:14.998 "rw_mbytes_per_sec": 0, 00:27:14.998 "r_mbytes_per_sec": 0, 00:27:14.998 "w_mbytes_per_sec": 0 00:27:14.998 }, 00:27:14.998 "claimed": false, 00:27:14.998 "zoned": false, 00:27:14.998 "supported_io_types": { 00:27:14.998 "read": true, 00:27:14.998 "write": true, 00:27:14.998 "unmap": true, 00:27:14.998 "flush": true, 00:27:14.999 "reset": true, 00:27:14.999 "nvme_admin": false, 00:27:14.999 "nvme_io": false, 00:27:14.999 "nvme_io_md": false, 00:27:14.999 "write_zeroes": true, 00:27:14.999 "zcopy": false, 00:27:14.999 "get_zone_info": false, 00:27:14.999 "zone_management": false, 00:27:14.999 "zone_append": false, 00:27:14.999 "compare": false, 00:27:14.999 "compare_and_write": false, 00:27:14.999 "abort": 
false, 00:27:14.999 "seek_hole": false, 00:27:14.999 "seek_data": false, 00:27:14.999 "copy": false, 00:27:14.999 "nvme_iov_md": false 00:27:14.999 }, 00:27:14.999 "memory_domains": [ 00:27:14.999 { 00:27:14.999 "dma_device_id": "system", 00:27:14.999 "dma_device_type": 1 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.999 "dma_device_type": 2 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "system", 00:27:14.999 "dma_device_type": 1 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.999 "dma_device_type": 2 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "system", 00:27:14.999 "dma_device_type": 1 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.999 "dma_device_type": 2 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "system", 00:27:14.999 "dma_device_type": 1 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.999 "dma_device_type": 2 00:27:14.999 } 00:27:14.999 ], 00:27:14.999 "driver_specific": { 00:27:14.999 "raid": { 00:27:14.999 "uuid": "aea7b47b-33aa-425e-9035-bc966d254e1f", 00:27:14.999 "strip_size_kb": 64, 00:27:14.999 "state": "online", 00:27:14.999 "raid_level": "raid0", 00:27:14.999 "superblock": true, 00:27:14.999 "num_base_bdevs": 4, 00:27:14.999 "num_base_bdevs_discovered": 4, 00:27:14.999 "num_base_bdevs_operational": 4, 00:27:14.999 "base_bdevs_list": [ 00:27:14.999 { 00:27:14.999 "name": "pt1", 00:27:14.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:14.999 "is_configured": true, 00:27:14.999 "data_offset": 2048, 00:27:14.999 "data_size": 63488 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "name": "pt2", 00:27:14.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.999 "is_configured": true, 00:27:14.999 "data_offset": 2048, 00:27:14.999 "data_size": 63488 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "name": "pt3", 
00:27:14.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:14.999 "is_configured": true, 00:27:14.999 "data_offset": 2048, 00:27:14.999 "data_size": 63488 00:27:14.999 }, 00:27:14.999 { 00:27:14.999 "name": "pt4", 00:27:14.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:14.999 "is_configured": true, 00:27:14.999 "data_offset": 2048, 00:27:14.999 "data_size": 63488 00:27:14.999 } 00:27:14.999 ] 00:27:14.999 } 00:27:14.999 } 00:27:14.999 }' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:14.999 pt2 00:27:14.999 pt3 00:27:14.999 pt4' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.999 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:15.257 13:38:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:15.257 [2024-10-28 13:38:29.339507] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aea7b47b-33aa-425e-9035-bc966d254e1f '!=' aea7b47b-33aa-425e-9035-bc966d254e1f ']' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83494 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 
-- # '[' -z 83494 ']' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83494 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.257 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83494 00:27:15.514 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:15.514 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:15.514 killing process with pid 83494 00:27:15.514 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83494' 00:27:15.514 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83494 00:27:15.514 [2024-10-28 13:38:29.422871] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:15.514 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83494 00:27:15.514 [2024-10-28 13:38:29.423017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:15.514 [2024-10-28 13:38:29.423169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:15.514 [2024-10-28 13:38:29.423189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:15.514 [2024-10-28 13:38:29.483535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.772 13:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:15.772 00:27:15.772 real 0m5.086s 00:27:15.772 user 0m8.269s 00:27:15.772 sys 0m0.911s 00:27:15.772 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:27:15.772 13:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.772 ************************************ 00:27:15.772 END TEST raid_superblock_test 00:27:15.772 ************************************ 00:27:15.772 13:38:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:27:15.772 13:38:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:15.772 13:38:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.772 13:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:15.772 ************************************ 00:27:15.772 START TEST raid_read_error_test 00:27:15.772 ************************************ 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ucfa5g8eT8 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=83753 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83753 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83753 ']' 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.772 13:38:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.030 [2024-10-28 13:38:30.001943] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:16.030 [2024-10-28 13:38:30.002194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83753 ] 00:27:16.030 [2024-10-28 13:38:30.158870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:16.288 [2024-10-28 13:38:30.188750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.288 [2024-10-28 13:38:30.265355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.288 [2024-10-28 13:38:30.349385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.289 [2024-10-28 13:38:30.349468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 BaseBdev1_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 true 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.222 [2024-10-28 13:38:31.088905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:17.222 [2024-10-28 13:38:31.088996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.222 [2024-10-28 13:38:31.089021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:17.222 [2024-10-28 13:38:31.089042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.222 [2024-10-28 13:38:31.092631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.222 [2024-10-28 13:38:31.092688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:17.222 BaseBdev1 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 BaseBdev2_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 true 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 [2024-10-28 13:38:31.126036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:17.222 [2024-10-28 13:38:31.126139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.222 [2024-10-28 13:38:31.126180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:17.222 [2024-10-28 13:38:31.126200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.222 [2024-10-28 13:38:31.129440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.222 [2024-10-28 13:38:31.129486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:17.222 BaseBdev2 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 BaseBdev3_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 true 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 [2024-10-28 13:38:31.163145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:17.222 [2024-10-28 13:38:31.163266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.222 [2024-10-28 13:38:31.163294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:17.222 [2024-10-28 13:38:31.163313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.222 [2024-10-28 13:38:31.166530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.222 [2024-10-28 13:38:31.166603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:17.222 BaseBdev3 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 BaseBdev4_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 true 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.222 [2024-10-28 13:38:31.212662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:17.222 [2024-10-28 13:38:31.212752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.222 [2024-10-28 13:38:31.212779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:17.222 [2024-10-28 13:38:31.212796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.222 [2024-10-28 13:38:31.216161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.222 [2024-10-28 13:38:31.216332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:17.222 BaseBdev4 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:17.222 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.223 [2024-10-28 13:38:31.220707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:17.223 [2024-10-28 13:38:31.223628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.223 [2024-10-28 13:38:31.223740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:17.223 [2024-10-28 13:38:31.223839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:17.223 [2024-10-28 13:38:31.224226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:17.223 [2024-10-28 13:38:31.224260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:17.223 [2024-10-28 13:38:31.224617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:17.223 [2024-10-28 13:38:31.224879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:17.223 [2024-10-28 13:38:31.224905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:17.223 [2024-10-28 13:38:31.225121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:17.223 13:38:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.223 "name": "raid_bdev1", 00:27:17.223 "uuid": "7cb129ad-93c0-4c3b-ada5-4f2708841b06", 00:27:17.223 "strip_size_kb": 64, 00:27:17.223 "state": "online", 00:27:17.223 "raid_level": "raid0", 00:27:17.223 "superblock": true, 00:27:17.223 "num_base_bdevs": 4, 00:27:17.223 "num_base_bdevs_discovered": 4, 00:27:17.223 "num_base_bdevs_operational": 4, 00:27:17.223 "base_bdevs_list": [ 00:27:17.223 { 00:27:17.223 "name": "BaseBdev1", 00:27:17.223 "uuid": "2a543940-05b4-5e3c-a2f2-379584ae154c", 00:27:17.223 "is_configured": true, 00:27:17.223 "data_offset": 2048, 00:27:17.223 "data_size": 63488 00:27:17.223 }, 00:27:17.223 { 00:27:17.223 "name": "BaseBdev2", 00:27:17.223 "uuid": "624f5db4-4087-5d0e-ae45-cdf5fa8b9435", 
00:27:17.223 "is_configured": true, 00:27:17.223 "data_offset": 2048, 00:27:17.223 "data_size": 63488 00:27:17.223 }, 00:27:17.223 { 00:27:17.223 "name": "BaseBdev3", 00:27:17.223 "uuid": "5f628630-8788-56c9-8c7d-5ea7a7ed6c6b", 00:27:17.223 "is_configured": true, 00:27:17.223 "data_offset": 2048, 00:27:17.223 "data_size": 63488 00:27:17.223 }, 00:27:17.223 { 00:27:17.223 "name": "BaseBdev4", 00:27:17.223 "uuid": "9fca887e-ee07-551a-8803-56791e33bfc1", 00:27:17.223 "is_configured": true, 00:27:17.223 "data_offset": 2048, 00:27:17.223 "data_size": 63488 00:27:17.223 } 00:27:17.223 ] 00:27:17.223 }' 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.223 13:38:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.789 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:17.789 13:38:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:17.789 [2024-10-28 13:38:31.850136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:18.722 13:38:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:18.722 "name": "raid_bdev1", 00:27:18.722 "uuid": "7cb129ad-93c0-4c3b-ada5-4f2708841b06", 00:27:18.722 "strip_size_kb": 64, 00:27:18.722 "state": "online", 00:27:18.722 "raid_level": "raid0", 00:27:18.722 "superblock": true, 00:27:18.722 "num_base_bdevs": 4, 
00:27:18.722 "num_base_bdevs_discovered": 4, 00:27:18.722 "num_base_bdevs_operational": 4, 00:27:18.722 "base_bdevs_list": [ 00:27:18.722 { 00:27:18.722 "name": "BaseBdev1", 00:27:18.722 "uuid": "2a543940-05b4-5e3c-a2f2-379584ae154c", 00:27:18.722 "is_configured": true, 00:27:18.722 "data_offset": 2048, 00:27:18.722 "data_size": 63488 00:27:18.722 }, 00:27:18.722 { 00:27:18.722 "name": "BaseBdev2", 00:27:18.722 "uuid": "624f5db4-4087-5d0e-ae45-cdf5fa8b9435", 00:27:18.722 "is_configured": true, 00:27:18.722 "data_offset": 2048, 00:27:18.722 "data_size": 63488 00:27:18.722 }, 00:27:18.722 { 00:27:18.722 "name": "BaseBdev3", 00:27:18.722 "uuid": "5f628630-8788-56c9-8c7d-5ea7a7ed6c6b", 00:27:18.722 "is_configured": true, 00:27:18.722 "data_offset": 2048, 00:27:18.722 "data_size": 63488 00:27:18.722 }, 00:27:18.722 { 00:27:18.722 "name": "BaseBdev4", 00:27:18.722 "uuid": "9fca887e-ee07-551a-8803-56791e33bfc1", 00:27:18.722 "is_configured": true, 00:27:18.722 "data_offset": 2048, 00:27:18.722 "data_size": 63488 00:27:18.722 } 00:27:18.722 ] 00:27:18.722 }' 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:18.722 13:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.288 [2024-10-28 13:38:33.330836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.288 [2024-10-28 13:38:33.330924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.288 [2024-10-28 13:38:33.334326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.288 [2024-10-28 13:38:33.334422] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.288 [2024-10-28 13:38:33.334498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.288 [2024-10-28 13:38:33.334520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:19.288 { 00:27:19.288 "results": [ 00:27:19.288 { 00:27:19.288 "job": "raid_bdev1", 00:27:19.288 "core_mask": "0x1", 00:27:19.288 "workload": "randrw", 00:27:19.288 "percentage": 50, 00:27:19.288 "status": "finished", 00:27:19.288 "queue_depth": 1, 00:27:19.288 "io_size": 131072, 00:27:19.288 "runtime": 1.477509, 00:27:19.288 "iops": 9416.524704756452, 00:27:19.288 "mibps": 1177.0655880945565, 00:27:19.288 "io_failed": 1, 00:27:19.288 "io_timeout": 0, 00:27:19.288 "avg_latency_us": 149.50185738366852, 00:27:19.288 "min_latency_us": 37.93454545454546, 00:27:19.288 "max_latency_us": 1869.2654545454545 00:27:19.288 } 00:27:19.288 ], 00:27:19.288 "core_count": 1 00:27:19.288 } 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83753 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83753 ']' 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83753 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83753 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:19.288 killing process with pid 83753 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83753' 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83753 00:27:19.288 [2024-10-28 13:38:33.371125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:19.288 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83753 00:27:19.288 [2024-10-28 13:38:33.437594] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ucfa5g8eT8 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:19.854 ************************************ 00:27:19.854 END TEST raid_read_error_test 00:27:19.854 ************************************ 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:27:19.854 00:27:19.854 real 0m3.896s 00:27:19.854 user 0m5.097s 00:27:19.854 sys 0m0.629s 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.854 13:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.854 13:38:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:27:19.854 13:38:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:19.854 13:38:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.854 13:38:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.854 ************************************ 00:27:19.854 START TEST raid_write_error_test 00:27:19.854 ************************************ 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.854 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nA0yKIUXo0 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83893 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83893 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83893 ']' 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:19.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:19.855 13:38:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.855 [2024-10-28 13:38:33.952568] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:19.855 [2024-10-28 13:38:33.952755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83893 ] 00:27:20.112 [2024-10-28 13:38:34.108811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:20.112 [2024-10-28 13:38:34.137017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.112 [2024-10-28 13:38:34.211545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.370 [2024-10-28 13:38:34.292744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.370 [2024-10-28 13:38:34.292864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.936 13:38:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.936 BaseBdev1_malloc 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.936 true 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:20.936 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.936 13:38:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.936 [2024-10-28 13:38:35.032915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:20.936 [2024-10-28 13:38:35.033019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.936 [2024-10-28 13:38:35.033055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:20.936 [2024-10-28 13:38:35.033079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.936 [2024-10-28 13:38:35.036532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.937 [2024-10-28 13:38:35.036584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:20.937 BaseBdev1 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.937 BaseBdev2_malloc 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.937 true 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.937 [2024-10-28 13:38:35.082445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:20.937 [2024-10-28 13:38:35.082530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.937 [2024-10-28 13:38:35.082570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:20.937 [2024-10-28 13:38:35.082590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.937 [2024-10-28 13:38:35.085852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.937 [2024-10-28 13:38:35.085906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:20.937 BaseBdev2 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.937 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 BaseBdev3_malloc 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 true 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 [2024-10-28 13:38:35.131112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:21.196 [2024-10-28 13:38:35.131216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.196 [2024-10-28 13:38:35.131248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:21.196 [2024-10-28 13:38:35.131273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.196 [2024-10-28 13:38:35.134381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.196 [2024-10-28 13:38:35.134444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:21.196 BaseBdev3 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 BaseBdev4_malloc 00:27:21.196 
13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 true 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 [2024-10-28 13:38:35.189858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:21.196 [2024-10-28 13:38:35.189958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.196 [2024-10-28 13:38:35.189995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:21.196 [2024-10-28 13:38:35.190027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.196 [2024-10-28 13:38:35.193370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.196 [2024-10-28 13:38:35.193447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:21.196 BaseBdev4 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:21.196 13:38:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.196 [2024-10-28 13:38:35.201895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:21.196 [2024-10-28 13:38:35.204785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:21.196 [2024-10-28 13:38:35.204938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:21.196 [2024-10-28 13:38:35.205053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:21.196 [2024-10-28 13:38:35.205384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:21.196 [2024-10-28 13:38:35.205419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:21.196 [2024-10-28 13:38:35.205823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:21.196 [2024-10-28 13:38:35.206046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:21.196 [2024-10-28 13:38:35.206069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:21.196 [2024-10-28 13:38:35.206368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.196 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.197 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.197 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.197 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:21.197 "name": "raid_bdev1", 00:27:21.197 "uuid": "a3faf7b0-1e60-439c-9e7d-efab530ccd25", 00:27:21.197 "strip_size_kb": 64, 00:27:21.197 "state": "online", 00:27:21.197 "raid_level": "raid0", 00:27:21.197 "superblock": true, 00:27:21.197 "num_base_bdevs": 4, 00:27:21.197 "num_base_bdevs_discovered": 4, 00:27:21.197 "num_base_bdevs_operational": 4, 00:27:21.197 "base_bdevs_list": [ 00:27:21.197 { 00:27:21.197 "name": "BaseBdev1", 00:27:21.197 "uuid": "4cd3ec5f-2152-5a5c-adf6-4de3046457f5", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 2048, 00:27:21.197 "data_size": 63488 00:27:21.197 }, 00:27:21.197 { 00:27:21.197 
"name": "BaseBdev2", 00:27:21.197 "uuid": "0492446c-a85d-569d-be6d-e3c4051d97ad", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 2048, 00:27:21.197 "data_size": 63488 00:27:21.197 }, 00:27:21.197 { 00:27:21.197 "name": "BaseBdev3", 00:27:21.197 "uuid": "3a4f0ff1-6297-534d-b818-f2ce4a7c128e", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 2048, 00:27:21.197 "data_size": 63488 00:27:21.197 }, 00:27:21.197 { 00:27:21.197 "name": "BaseBdev4", 00:27:21.197 "uuid": "e0a65a78-2c75-5c8d-b295-010218442988", 00:27:21.197 "is_configured": true, 00:27:21.197 "data_offset": 2048, 00:27:21.197 "data_size": 63488 00:27:21.197 } 00:27:21.197 ] 00:27:21.197 }' 00:27:21.197 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:21.197 13:38:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.763 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:21.763 13:38:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:21.763 [2024-10-28 13:38:35.847252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.696 "name": "raid_bdev1", 00:27:22.697 "uuid": "a3faf7b0-1e60-439c-9e7d-efab530ccd25", 00:27:22.697 "strip_size_kb": 64, 00:27:22.697 "state": "online", 
00:27:22.697 "raid_level": "raid0", 00:27:22.697 "superblock": true, 00:27:22.697 "num_base_bdevs": 4, 00:27:22.697 "num_base_bdevs_discovered": 4, 00:27:22.697 "num_base_bdevs_operational": 4, 00:27:22.697 "base_bdevs_list": [ 00:27:22.697 { 00:27:22.697 "name": "BaseBdev1", 00:27:22.697 "uuid": "4cd3ec5f-2152-5a5c-adf6-4de3046457f5", 00:27:22.697 "is_configured": true, 00:27:22.697 "data_offset": 2048, 00:27:22.697 "data_size": 63488 00:27:22.697 }, 00:27:22.697 { 00:27:22.697 "name": "BaseBdev2", 00:27:22.697 "uuid": "0492446c-a85d-569d-be6d-e3c4051d97ad", 00:27:22.697 "is_configured": true, 00:27:22.697 "data_offset": 2048, 00:27:22.697 "data_size": 63488 00:27:22.697 }, 00:27:22.697 { 00:27:22.697 "name": "BaseBdev3", 00:27:22.697 "uuid": "3a4f0ff1-6297-534d-b818-f2ce4a7c128e", 00:27:22.697 "is_configured": true, 00:27:22.697 "data_offset": 2048, 00:27:22.697 "data_size": 63488 00:27:22.697 }, 00:27:22.697 { 00:27:22.697 "name": "BaseBdev4", 00:27:22.697 "uuid": "e0a65a78-2c75-5c8d-b295-010218442988", 00:27:22.697 "is_configured": true, 00:27:22.697 "data_offset": 2048, 00:27:22.697 "data_size": 63488 00:27:22.697 } 00:27:22.697 ] 00:27:22.697 }' 00:27:22.697 13:38:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.697 13:38:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.263 [2024-10-28 13:38:37.317708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:23.263 [2024-10-28 13:38:37.317762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:23.263 [2024-10-28 13:38:37.321117] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:23.263 [2024-10-28 13:38:37.321244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.263 [2024-10-28 13:38:37.321307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:23.263 [2024-10-28 13:38:37.321338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:23.263 { 00:27:23.263 "results": [ 00:27:23.263 { 00:27:23.263 "job": "raid_bdev1", 00:27:23.263 "core_mask": "0x1", 00:27:23.263 "workload": "randrw", 00:27:23.263 "percentage": 50, 00:27:23.263 "status": "finished", 00:27:23.263 "queue_depth": 1, 00:27:23.263 "io_size": 131072, 00:27:23.263 "runtime": 1.467455, 00:27:23.263 "iops": 10060.274420680702, 00:27:23.263 "mibps": 1257.5343025850877, 00:27:23.263 "io_failed": 1, 00:27:23.263 "io_timeout": 0, 00:27:23.263 "avg_latency_us": 140.47267013127754, 00:27:23.263 "min_latency_us": 37.70181818181818, 00:27:23.263 "max_latency_us": 1705.4254545454546 00:27:23.263 } 00:27:23.263 ], 00:27:23.263 "core_count": 1 00:27:23.263 } 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83893 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83893 ']' 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83893 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83893 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:23.263 killing process with pid 83893 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83893' 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83893 00:27:23.263 [2024-10-28 13:38:37.358334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:23.263 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83893 00:27:23.263 [2024-10-28 13:38:37.406612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nA0yKIUXo0 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:27:23.831 00:27:23.831 real 0m3.898s 00:27:23.831 user 0m5.116s 00:27:23.831 sys 0m0.625s 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.831 13:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.831 ************************************ 00:27:23.831 END TEST raid_write_error_test 00:27:23.831 
************************************ 00:27:23.831 13:38:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:23.831 13:38:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:27:23.831 13:38:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:23.831 13:38:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:23.831 13:38:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:23.831 ************************************ 00:27:23.831 START TEST raid_state_function_test 00:27:23.831 ************************************ 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:23.831 13:38:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:23.831 13:38:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84020 00:27:23.831 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:23.831 Process raid pid: 84020 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84020' 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84020 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84020 ']' 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:23.832 13:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.832 [2024-10-28 13:38:37.892423] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:23.832 [2024-10-28 13:38:37.892627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.090 [2024-10-28 13:38:38.038104] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:24.090 [2024-10-28 13:38:38.062284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.090 [2024-10-28 13:38:38.127100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.090 [2024-10-28 13:38:38.209383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:24.090 [2024-10-28 13:38:38.209500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.025 [2024-10-28 13:38:38.899553] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:25.025 [2024-10-28 13:38:38.899622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:25.025 [2024-10-28 13:38:38.899642] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:25.025 [2024-10-28 13:38:38.899656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:25.025 [2024-10-28 13:38:38.899672] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:25.025 [2024-10-28 13:38:38.899683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:25.025 [2024-10-28 13:38:38.899695] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:25.025 
[2024-10-28 13:38:38.899705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.025 13:38:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.025 "name": "Existed_Raid", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.025 "strip_size_kb": 64, 00:27:25.025 "state": "configuring", 00:27:25.025 "raid_level": "concat", 00:27:25.025 "superblock": false, 00:27:25.025 "num_base_bdevs": 4, 00:27:25.025 "num_base_bdevs_discovered": 0, 00:27:25.025 "num_base_bdevs_operational": 4, 00:27:25.025 "base_bdevs_list": [ 00:27:25.025 { 00:27:25.025 "name": "BaseBdev1", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.025 "is_configured": false, 00:27:25.025 "data_offset": 0, 00:27:25.025 "data_size": 0 00:27:25.025 }, 00:27:25.025 { 00:27:25.025 "name": "BaseBdev2", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.025 "is_configured": false, 00:27:25.025 "data_offset": 0, 00:27:25.025 "data_size": 0 00:27:25.025 }, 00:27:25.025 { 00:27:25.025 "name": "BaseBdev3", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.025 "is_configured": false, 00:27:25.025 "data_offset": 0, 00:27:25.025 "data_size": 0 00:27:25.025 }, 00:27:25.025 { 00:27:25.025 "name": "BaseBdev4", 00:27:25.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.025 "is_configured": false, 00:27:25.025 "data_offset": 0, 00:27:25.025 "data_size": 0 00:27:25.025 } 00:27:25.025 ] 00:27:25.025 }' 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.025 13:38:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.283 [2024-10-28 13:38:39.371572] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:25.283 [2024-10-28 13:38:39.371614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.283 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.283 [2024-10-28 13:38:39.379550] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:25.283 [2024-10-28 13:38:39.379595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:25.283 [2024-10-28 13:38:39.379612] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:25.283 [2024-10-28 13:38:39.379625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:25.284 [2024-10-28 13:38:39.379637] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:25.284 [2024-10-28 13:38:39.379649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:25.284 [2024-10-28 13:38:39.379661] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:25.284 [2024-10-28 13:38:39.379671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.284 [2024-10-28 13:38:39.404032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:25.284 BaseBdev1 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.284 [ 00:27:25.284 { 
00:27:25.284 "name": "BaseBdev1", 00:27:25.284 "aliases": [ 00:27:25.284 "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd" 00:27:25.284 ], 00:27:25.284 "product_name": "Malloc disk", 00:27:25.284 "block_size": 512, 00:27:25.284 "num_blocks": 65536, 00:27:25.284 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:25.284 "assigned_rate_limits": { 00:27:25.284 "rw_ios_per_sec": 0, 00:27:25.284 "rw_mbytes_per_sec": 0, 00:27:25.284 "r_mbytes_per_sec": 0, 00:27:25.284 "w_mbytes_per_sec": 0 00:27:25.284 }, 00:27:25.284 "claimed": true, 00:27:25.284 "claim_type": "exclusive_write", 00:27:25.284 "zoned": false, 00:27:25.284 "supported_io_types": { 00:27:25.284 "read": true, 00:27:25.284 "write": true, 00:27:25.284 "unmap": true, 00:27:25.284 "flush": true, 00:27:25.284 "reset": true, 00:27:25.284 "nvme_admin": false, 00:27:25.284 "nvme_io": false, 00:27:25.284 "nvme_io_md": false, 00:27:25.284 "write_zeroes": true, 00:27:25.284 "zcopy": true, 00:27:25.284 "get_zone_info": false, 00:27:25.284 "zone_management": false, 00:27:25.284 "zone_append": false, 00:27:25.284 "compare": false, 00:27:25.284 "compare_and_write": false, 00:27:25.284 "abort": true, 00:27:25.284 "seek_hole": false, 00:27:25.284 "seek_data": false, 00:27:25.284 "copy": true, 00:27:25.284 "nvme_iov_md": false 00:27:25.284 }, 00:27:25.284 "memory_domains": [ 00:27:25.284 { 00:27:25.284 "dma_device_id": "system", 00:27:25.284 "dma_device_type": 1 00:27:25.284 }, 00:27:25.284 { 00:27:25.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.284 "dma_device_type": 2 00:27:25.284 } 00:27:25.284 ], 00:27:25.284 "driver_specific": {} 00:27:25.284 } 00:27:25.284 ] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.284 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.550 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.550 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.550 "name": "Existed_Raid", 00:27:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.550 "strip_size_kb": 64, 00:27:25.550 "state": "configuring", 00:27:25.550 "raid_level": "concat", 00:27:25.550 "superblock": false, 00:27:25.550 "num_base_bdevs": 4, 00:27:25.550 
"num_base_bdevs_discovered": 1, 00:27:25.550 "num_base_bdevs_operational": 4, 00:27:25.550 "base_bdevs_list": [ 00:27:25.550 { 00:27:25.550 "name": "BaseBdev1", 00:27:25.550 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:25.550 "is_configured": true, 00:27:25.550 "data_offset": 0, 00:27:25.550 "data_size": 65536 00:27:25.550 }, 00:27:25.550 { 00:27:25.550 "name": "BaseBdev2", 00:27:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.550 "is_configured": false, 00:27:25.550 "data_offset": 0, 00:27:25.550 "data_size": 0 00:27:25.550 }, 00:27:25.550 { 00:27:25.550 "name": "BaseBdev3", 00:27:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.550 "is_configured": false, 00:27:25.550 "data_offset": 0, 00:27:25.550 "data_size": 0 00:27:25.550 }, 00:27:25.550 { 00:27:25.550 "name": "BaseBdev4", 00:27:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.550 "is_configured": false, 00:27:25.550 "data_offset": 0, 00:27:25.550 "data_size": 0 00:27:25.550 } 00:27:25.550 ] 00:27:25.550 }' 00:27:25.550 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.550 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.809 [2024-10-28 13:38:39.960273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:25.809 [2024-10-28 13:38:39.960364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.809 13:38:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.809 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.066 [2024-10-28 13:38:39.968259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.066 [2024-10-28 13:38:39.971053] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:26.067 [2024-10-28 13:38:39.971116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:26.067 [2024-10-28 13:38:39.971193] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:26.067 [2024-10-28 13:38:39.971209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:26.067 [2024-10-28 13:38:39.971221] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:26.067 [2024-10-28 13:38:39.971233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.067 13:38:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.067 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.067 "name": "Existed_Raid", 00:27:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.067 "strip_size_kb": 64, 00:27:26.067 "state": "configuring", 00:27:26.067 "raid_level": "concat", 00:27:26.067 "superblock": false, 00:27:26.067 "num_base_bdevs": 4, 00:27:26.067 "num_base_bdevs_discovered": 1, 00:27:26.067 "num_base_bdevs_operational": 4, 00:27:26.067 "base_bdevs_list": [ 00:27:26.067 { 00:27:26.067 "name": "BaseBdev1", 00:27:26.067 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:26.067 
"is_configured": true, 00:27:26.067 "data_offset": 0, 00:27:26.067 "data_size": 65536 00:27:26.067 }, 00:27:26.067 { 00:27:26.067 "name": "BaseBdev2", 00:27:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.067 "is_configured": false, 00:27:26.067 "data_offset": 0, 00:27:26.067 "data_size": 0 00:27:26.067 }, 00:27:26.067 { 00:27:26.067 "name": "BaseBdev3", 00:27:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.067 "is_configured": false, 00:27:26.067 "data_offset": 0, 00:27:26.067 "data_size": 0 00:27:26.067 }, 00:27:26.067 { 00:27:26.067 "name": "BaseBdev4", 00:27:26.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.067 "is_configured": false, 00:27:26.067 "data_offset": 0, 00:27:26.067 "data_size": 0 00:27:26.067 } 00:27:26.067 ] 00:27:26.067 }' 00:27:26.067 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.067 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 [2024-10-28 13:38:40.570360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.635 BaseBdev2 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:26.635 13:38:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 [ 00:27:26.635 { 00:27:26.635 "name": "BaseBdev2", 00:27:26.635 "aliases": [ 00:27:26.635 "b81848e2-f483-45c5-b9df-e019ec0d3ea3" 00:27:26.635 ], 00:27:26.635 "product_name": "Malloc disk", 00:27:26.635 "block_size": 512, 00:27:26.635 "num_blocks": 65536, 00:27:26.635 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:26.635 "assigned_rate_limits": { 00:27:26.635 "rw_ios_per_sec": 0, 00:27:26.635 "rw_mbytes_per_sec": 0, 00:27:26.635 "r_mbytes_per_sec": 0, 00:27:26.635 "w_mbytes_per_sec": 0 00:27:26.635 }, 00:27:26.635 "claimed": true, 00:27:26.635 "claim_type": "exclusive_write", 00:27:26.635 "zoned": false, 00:27:26.635 "supported_io_types": { 00:27:26.635 "read": true, 00:27:26.635 "write": true, 00:27:26.635 "unmap": true, 00:27:26.635 "flush": true, 00:27:26.635 "reset": true, 00:27:26.635 "nvme_admin": false, 00:27:26.635 "nvme_io": false, 00:27:26.635 "nvme_io_md": 
false, 00:27:26.635 "write_zeroes": true, 00:27:26.635 "zcopy": true, 00:27:26.635 "get_zone_info": false, 00:27:26.635 "zone_management": false, 00:27:26.635 "zone_append": false, 00:27:26.635 "compare": false, 00:27:26.635 "compare_and_write": false, 00:27:26.635 "abort": true, 00:27:26.635 "seek_hole": false, 00:27:26.635 "seek_data": false, 00:27:26.635 "copy": true, 00:27:26.635 "nvme_iov_md": false 00:27:26.635 }, 00:27:26.635 "memory_domains": [ 00:27:26.635 { 00:27:26.635 "dma_device_id": "system", 00:27:26.635 "dma_device_type": 1 00:27:26.635 }, 00:27:26.635 { 00:27:26.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.635 "dma_device_type": 2 00:27:26.635 } 00:27:26.635 ], 00:27:26.635 "driver_specific": {} 00:27:26.635 } 00:27:26.635 ] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.635 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.635 "name": "Existed_Raid", 00:27:26.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.635 "strip_size_kb": 64, 00:27:26.635 "state": "configuring", 00:27:26.635 "raid_level": "concat", 00:27:26.635 "superblock": false, 00:27:26.635 "num_base_bdevs": 4, 00:27:26.635 "num_base_bdevs_discovered": 2, 00:27:26.635 "num_base_bdevs_operational": 4, 00:27:26.635 "base_bdevs_list": [ 00:27:26.635 { 00:27:26.635 "name": "BaseBdev1", 00:27:26.635 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:26.636 "is_configured": true, 00:27:26.636 "data_offset": 0, 00:27:26.636 "data_size": 65536 00:27:26.636 }, 00:27:26.636 { 00:27:26.636 "name": "BaseBdev2", 00:27:26.636 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:26.636 "is_configured": true, 00:27:26.636 "data_offset": 0, 00:27:26.636 "data_size": 65536 00:27:26.636 }, 00:27:26.636 { 00:27:26.636 "name": "BaseBdev3", 00:27:26.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.636 
"is_configured": false, 00:27:26.636 "data_offset": 0, 00:27:26.636 "data_size": 0 00:27:26.636 }, 00:27:26.636 { 00:27:26.636 "name": "BaseBdev4", 00:27:26.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.636 "is_configured": false, 00:27:26.636 "data_offset": 0, 00:27:26.636 "data_size": 0 00:27:26.636 } 00:27:26.636 ] 00:27:26.636 }' 00:27:26.636 13:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.636 13:38:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.201 [2024-10-28 13:38:41.172043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:27.201 BaseBdev3 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:27.201 13:38:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.201 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.201 [ 00:27:27.201 { 00:27:27.201 "name": "BaseBdev3", 00:27:27.201 "aliases": [ 00:27:27.201 "7fd2a37e-1b95-483d-8ba8-44a526d4b959" 00:27:27.201 ], 00:27:27.201 "product_name": "Malloc disk", 00:27:27.201 "block_size": 512, 00:27:27.201 "num_blocks": 65536, 00:27:27.201 "uuid": "7fd2a37e-1b95-483d-8ba8-44a526d4b959", 00:27:27.201 "assigned_rate_limits": { 00:27:27.201 "rw_ios_per_sec": 0, 00:27:27.201 "rw_mbytes_per_sec": 0, 00:27:27.201 "r_mbytes_per_sec": 0, 00:27:27.201 "w_mbytes_per_sec": 0 00:27:27.201 }, 00:27:27.201 "claimed": true, 00:27:27.201 "claim_type": "exclusive_write", 00:27:27.201 "zoned": false, 00:27:27.201 "supported_io_types": { 00:27:27.201 "read": true, 00:27:27.201 "write": true, 00:27:27.201 "unmap": true, 00:27:27.201 "flush": true, 00:27:27.201 "reset": true, 00:27:27.201 "nvme_admin": false, 00:27:27.201 "nvme_io": false, 00:27:27.201 "nvme_io_md": false, 00:27:27.201 "write_zeroes": true, 00:27:27.201 "zcopy": true, 00:27:27.201 "get_zone_info": false, 00:27:27.201 "zone_management": false, 00:27:27.201 "zone_append": false, 00:27:27.201 "compare": false, 00:27:27.201 "compare_and_write": false, 00:27:27.201 "abort": true, 00:27:27.201 "seek_hole": false, 00:27:27.201 "seek_data": false, 00:27:27.202 "copy": true, 00:27:27.202 "nvme_iov_md": false 00:27:27.202 }, 00:27:27.202 
"memory_domains": [ 00:27:27.202 { 00:27:27.202 "dma_device_id": "system", 00:27:27.202 "dma_device_type": 1 00:27:27.202 }, 00:27:27.202 { 00:27:27.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.202 "dma_device_type": 2 00:27:27.202 } 00:27:27.202 ], 00:27:27.202 "driver_specific": {} 00:27:27.202 } 00:27:27.202 ] 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.202 "name": "Existed_Raid", 00:27:27.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.202 "strip_size_kb": 64, 00:27:27.202 "state": "configuring", 00:27:27.202 "raid_level": "concat", 00:27:27.202 "superblock": false, 00:27:27.202 "num_base_bdevs": 4, 00:27:27.202 "num_base_bdevs_discovered": 3, 00:27:27.202 "num_base_bdevs_operational": 4, 00:27:27.202 "base_bdevs_list": [ 00:27:27.202 { 00:27:27.202 "name": "BaseBdev1", 00:27:27.202 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:27.202 "is_configured": true, 00:27:27.202 "data_offset": 0, 00:27:27.202 "data_size": 65536 00:27:27.202 }, 00:27:27.202 { 00:27:27.202 "name": "BaseBdev2", 00:27:27.202 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:27.202 "is_configured": true, 00:27:27.202 "data_offset": 0, 00:27:27.202 "data_size": 65536 00:27:27.202 }, 00:27:27.202 { 00:27:27.202 "name": "BaseBdev3", 00:27:27.202 "uuid": "7fd2a37e-1b95-483d-8ba8-44a526d4b959", 00:27:27.202 "is_configured": true, 00:27:27.202 "data_offset": 0, 00:27:27.202 "data_size": 65536 00:27:27.202 }, 00:27:27.202 { 00:27:27.202 "name": "BaseBdev4", 00:27:27.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.202 "is_configured": false, 00:27:27.202 "data_offset": 0, 00:27:27.202 "data_size": 0 00:27:27.202 } 00:27:27.202 ] 00:27:27.202 }' 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:27:27.202 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.769 [2024-10-28 13:38:41.774460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:27.769 [2024-10-28 13:38:41.774533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:27.769 [2024-10-28 13:38:41.774562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:27.769 [2024-10-28 13:38:41.774931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:27.769 [2024-10-28 13:38:41.775125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:27.769 [2024-10-28 13:38:41.775176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:27:27.769 [2024-10-28 13:38:41.775476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.769 BaseBdev4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:27.769 13:38:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.769 [ 00:27:27.769 { 00:27:27.769 "name": "BaseBdev4", 00:27:27.769 "aliases": [ 00:27:27.769 "05b2ecaf-91e0-4803-8a63-03aed68b1e63" 00:27:27.769 ], 00:27:27.769 "product_name": "Malloc disk", 00:27:27.769 "block_size": 512, 00:27:27.769 "num_blocks": 65536, 00:27:27.769 "uuid": "05b2ecaf-91e0-4803-8a63-03aed68b1e63", 00:27:27.769 "assigned_rate_limits": { 00:27:27.769 "rw_ios_per_sec": 0, 00:27:27.769 "rw_mbytes_per_sec": 0, 00:27:27.769 "r_mbytes_per_sec": 0, 00:27:27.769 "w_mbytes_per_sec": 0 00:27:27.769 }, 00:27:27.769 "claimed": true, 00:27:27.769 "claim_type": "exclusive_write", 00:27:27.769 "zoned": false, 00:27:27.769 "supported_io_types": { 00:27:27.769 "read": true, 00:27:27.769 "write": true, 00:27:27.769 "unmap": true, 00:27:27.769 "flush": true, 00:27:27.769 "reset": true, 00:27:27.769 "nvme_admin": false, 00:27:27.769 "nvme_io": false, 00:27:27.769 "nvme_io_md": false, 00:27:27.769 "write_zeroes": true, 00:27:27.769 "zcopy": true, 00:27:27.769 "get_zone_info": false, 
00:27:27.769 "zone_management": false, 00:27:27.769 "zone_append": false, 00:27:27.769 "compare": false, 00:27:27.769 "compare_and_write": false, 00:27:27.769 "abort": true, 00:27:27.769 "seek_hole": false, 00:27:27.769 "seek_data": false, 00:27:27.769 "copy": true, 00:27:27.769 "nvme_iov_md": false 00:27:27.769 }, 00:27:27.769 "memory_domains": [ 00:27:27.769 { 00:27:27.769 "dma_device_id": "system", 00:27:27.769 "dma_device_type": 1 00:27:27.769 }, 00:27:27.769 { 00:27:27.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.769 "dma_device_type": 2 00:27:27.769 } 00:27:27.769 ], 00:27:27.769 "driver_specific": {} 00:27:27.769 } 00:27:27.769 ] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.769 "name": "Existed_Raid", 00:27:27.769 "uuid": "bfb77ebd-b6df-4157-981a-732b13b340cc", 00:27:27.769 "strip_size_kb": 64, 00:27:27.769 "state": "online", 00:27:27.769 "raid_level": "concat", 00:27:27.769 "superblock": false, 00:27:27.769 "num_base_bdevs": 4, 00:27:27.769 "num_base_bdevs_discovered": 4, 00:27:27.769 "num_base_bdevs_operational": 4, 00:27:27.769 "base_bdevs_list": [ 00:27:27.769 { 00:27:27.769 "name": "BaseBdev1", 00:27:27.769 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:27.769 "is_configured": true, 00:27:27.769 "data_offset": 0, 00:27:27.769 "data_size": 65536 00:27:27.769 }, 00:27:27.769 { 00:27:27.769 "name": "BaseBdev2", 00:27:27.769 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:27.769 "is_configured": true, 00:27:27.769 "data_offset": 0, 00:27:27.769 "data_size": 65536 00:27:27.769 }, 00:27:27.769 { 00:27:27.769 "name": "BaseBdev3", 00:27:27.769 "uuid": "7fd2a37e-1b95-483d-8ba8-44a526d4b959", 00:27:27.769 "is_configured": true, 00:27:27.769 "data_offset": 0, 00:27:27.769 "data_size": 65536 00:27:27.769 }, 00:27:27.769 { 
00:27:27.769 "name": "BaseBdev4", 00:27:27.769 "uuid": "05b2ecaf-91e0-4803-8a63-03aed68b1e63", 00:27:27.769 "is_configured": true, 00:27:27.769 "data_offset": 0, 00:27:27.769 "data_size": 65536 00:27:27.769 } 00:27:27.769 ] 00:27:27.769 }' 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.769 13:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:28.364 [2024-10-28 13:38:42.379101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.364 "name": "Existed_Raid", 00:27:28.364 "aliases": [ 00:27:28.364 
"bfb77ebd-b6df-4157-981a-732b13b340cc" 00:27:28.364 ], 00:27:28.364 "product_name": "Raid Volume", 00:27:28.364 "block_size": 512, 00:27:28.364 "num_blocks": 262144, 00:27:28.364 "uuid": "bfb77ebd-b6df-4157-981a-732b13b340cc", 00:27:28.364 "assigned_rate_limits": { 00:27:28.364 "rw_ios_per_sec": 0, 00:27:28.364 "rw_mbytes_per_sec": 0, 00:27:28.364 "r_mbytes_per_sec": 0, 00:27:28.364 "w_mbytes_per_sec": 0 00:27:28.364 }, 00:27:28.364 "claimed": false, 00:27:28.364 "zoned": false, 00:27:28.364 "supported_io_types": { 00:27:28.364 "read": true, 00:27:28.364 "write": true, 00:27:28.364 "unmap": true, 00:27:28.364 "flush": true, 00:27:28.364 "reset": true, 00:27:28.364 "nvme_admin": false, 00:27:28.364 "nvme_io": false, 00:27:28.364 "nvme_io_md": false, 00:27:28.364 "write_zeroes": true, 00:27:28.364 "zcopy": false, 00:27:28.364 "get_zone_info": false, 00:27:28.364 "zone_management": false, 00:27:28.364 "zone_append": false, 00:27:28.364 "compare": false, 00:27:28.364 "compare_and_write": false, 00:27:28.364 "abort": false, 00:27:28.364 "seek_hole": false, 00:27:28.364 "seek_data": false, 00:27:28.364 "copy": false, 00:27:28.364 "nvme_iov_md": false 00:27:28.364 }, 00:27:28.364 "memory_domains": [ 00:27:28.364 { 00:27:28.364 "dma_device_id": "system", 00:27:28.364 "dma_device_type": 1 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.364 "dma_device_type": 2 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "system", 00:27:28.364 "dma_device_type": 1 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.364 "dma_device_type": 2 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "system", 00:27:28.364 "dma_device_type": 1 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.364 "dma_device_type": 2 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "dma_device_id": "system", 00:27:28.364 "dma_device_type": 1 00:27:28.364 }, 
00:27:28.364 { 00:27:28.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.364 "dma_device_type": 2 00:27:28.364 } 00:27:28.364 ], 00:27:28.364 "driver_specific": { 00:27:28.364 "raid": { 00:27:28.364 "uuid": "bfb77ebd-b6df-4157-981a-732b13b340cc", 00:27:28.364 "strip_size_kb": 64, 00:27:28.364 "state": "online", 00:27:28.364 "raid_level": "concat", 00:27:28.364 "superblock": false, 00:27:28.364 "num_base_bdevs": 4, 00:27:28.364 "num_base_bdevs_discovered": 4, 00:27:28.364 "num_base_bdevs_operational": 4, 00:27:28.364 "base_bdevs_list": [ 00:27:28.364 { 00:27:28.364 "name": "BaseBdev1", 00:27:28.364 "uuid": "ecd68bb5-5fc0-4ffd-875a-6e78c4d2cbbd", 00:27:28.364 "is_configured": true, 00:27:28.364 "data_offset": 0, 00:27:28.364 "data_size": 65536 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "name": "BaseBdev2", 00:27:28.364 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:28.364 "is_configured": true, 00:27:28.364 "data_offset": 0, 00:27:28.364 "data_size": 65536 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "name": "BaseBdev3", 00:27:28.364 "uuid": "7fd2a37e-1b95-483d-8ba8-44a526d4b959", 00:27:28.364 "is_configured": true, 00:27:28.364 "data_offset": 0, 00:27:28.364 "data_size": 65536 00:27:28.364 }, 00:27:28.364 { 00:27:28.364 "name": "BaseBdev4", 00:27:28.364 "uuid": "05b2ecaf-91e0-4803-8a63-03aed68b1e63", 00:27:28.364 "is_configured": true, 00:27:28.364 "data_offset": 0, 00:27:28.364 "data_size": 65536 00:27:28.364 } 00:27:28.364 ] 00:27:28.364 } 00:27:28.364 } 00:27:28.364 }' 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:28.364 BaseBdev2 00:27:28.364 BaseBdev3 00:27:28.364 BaseBdev4' 00:27:28.364 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.623 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.623 [2024-10-28 13:38:42.766838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:28.623 [2024-10-28 13:38:42.766870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:28.623 [2024-10-28 13:38:42.766950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.881 "name": "Existed_Raid", 00:27:28.881 "uuid": "bfb77ebd-b6df-4157-981a-732b13b340cc", 00:27:28.881 "strip_size_kb": 64, 00:27:28.881 "state": "offline", 00:27:28.881 "raid_level": "concat", 00:27:28.881 "superblock": false, 00:27:28.881 "num_base_bdevs": 4, 00:27:28.881 "num_base_bdevs_discovered": 3, 00:27:28.881 "num_base_bdevs_operational": 3, 00:27:28.881 "base_bdevs_list": [ 00:27:28.881 { 00:27:28.881 "name": null, 00:27:28.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.881 "is_configured": false, 00:27:28.881 "data_offset": 0, 00:27:28.881 "data_size": 65536 00:27:28.881 }, 00:27:28.881 { 00:27:28.881 "name": "BaseBdev2", 00:27:28.881 "uuid": "b81848e2-f483-45c5-b9df-e019ec0d3ea3", 00:27:28.881 "is_configured": true, 00:27:28.881 "data_offset": 0, 00:27:28.881 "data_size": 65536 00:27:28.881 }, 00:27:28.881 { 00:27:28.881 "name": "BaseBdev3", 00:27:28.881 "uuid": "7fd2a37e-1b95-483d-8ba8-44a526d4b959", 
00:27:28.881 "is_configured": true, 00:27:28.881 "data_offset": 0, 00:27:28.881 "data_size": 65536 00:27:28.881 }, 00:27:28.881 { 00:27:28.881 "name": "BaseBdev4", 00:27:28.881 "uuid": "05b2ecaf-91e0-4803-8a63-03aed68b1e63", 00:27:28.881 "is_configured": true, 00:27:28.881 "data_offset": 0, 00:27:28.881 "data_size": 65536 00:27:28.881 } 00:27:28.881 ] 00:27:28.881 }' 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.881 13:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.446 [2024-10-28 13:38:43.405968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.446 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.446 [2024-10-28 13:38:43.492071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.447 
13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.447 [2024-10-28 13:38:43.570253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:29.447 [2024-10-28 13:38:43.570346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.447 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 BaseBdev2 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 [ 00:27:29.706 { 00:27:29.706 "name": "BaseBdev2", 00:27:29.706 "aliases": [ 00:27:29.706 "391893e8-b7d2-4e2b-99a6-d966608d9b9b" 00:27:29.706 ], 00:27:29.706 "product_name": "Malloc disk", 00:27:29.706 "block_size": 512, 00:27:29.706 "num_blocks": 65536, 00:27:29.706 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:29.706 "assigned_rate_limits": { 00:27:29.706 "rw_ios_per_sec": 0, 00:27:29.706 "rw_mbytes_per_sec": 0, 00:27:29.706 "r_mbytes_per_sec": 0, 00:27:29.706 "w_mbytes_per_sec": 0 00:27:29.706 }, 00:27:29.706 "claimed": false, 00:27:29.706 "zoned": false, 00:27:29.706 "supported_io_types": { 00:27:29.706 "read": true, 00:27:29.706 "write": true, 00:27:29.706 "unmap": true, 00:27:29.706 "flush": true, 00:27:29.706 "reset": true, 00:27:29.706 "nvme_admin": false, 00:27:29.706 "nvme_io": false, 00:27:29.706 "nvme_io_md": false, 00:27:29.706 "write_zeroes": true, 00:27:29.706 "zcopy": true, 00:27:29.706 "get_zone_info": false, 00:27:29.706 "zone_management": false, 00:27:29.706 "zone_append": false, 00:27:29.706 "compare": false, 00:27:29.706 "compare_and_write": false, 00:27:29.706 "abort": true, 00:27:29.706 "seek_hole": false, 00:27:29.706 "seek_data": false, 00:27:29.706 "copy": true, 00:27:29.706 "nvme_iov_md": false 00:27:29.706 }, 00:27:29.706 "memory_domains": [ 00:27:29.706 { 00:27:29.706 "dma_device_id": "system", 00:27:29.706 
"dma_device_type": 1 00:27:29.706 }, 00:27:29.706 { 00:27:29.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.706 "dma_device_type": 2 00:27:29.706 } 00:27:29.706 ], 00:27:29.706 "driver_specific": {} 00:27:29.706 } 00:27:29.706 ] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 BaseBdev3 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:29.706 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 [ 00:27:29.707 { 00:27:29.707 "name": "BaseBdev3", 00:27:29.707 "aliases": [ 00:27:29.707 "309cfbfb-d4db-4764-85a1-79337117ce8b" 00:27:29.707 ], 00:27:29.707 "product_name": "Malloc disk", 00:27:29.707 "block_size": 512, 00:27:29.707 "num_blocks": 65536, 00:27:29.707 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:29.707 "assigned_rate_limits": { 00:27:29.707 "rw_ios_per_sec": 0, 00:27:29.707 "rw_mbytes_per_sec": 0, 00:27:29.707 "r_mbytes_per_sec": 0, 00:27:29.707 "w_mbytes_per_sec": 0 00:27:29.707 }, 00:27:29.707 "claimed": false, 00:27:29.707 "zoned": false, 00:27:29.707 "supported_io_types": { 00:27:29.707 "read": true, 00:27:29.707 "write": true, 00:27:29.707 "unmap": true, 00:27:29.707 "flush": true, 00:27:29.707 "reset": true, 00:27:29.707 "nvme_admin": false, 00:27:29.707 "nvme_io": false, 00:27:29.707 "nvme_io_md": false, 00:27:29.707 "write_zeroes": true, 00:27:29.707 "zcopy": true, 00:27:29.707 "get_zone_info": false, 00:27:29.707 "zone_management": false, 00:27:29.707 "zone_append": false, 00:27:29.707 "compare": false, 00:27:29.707 "compare_and_write": false, 00:27:29.707 "abort": true, 00:27:29.707 "seek_hole": false, 00:27:29.707 "seek_data": false, 00:27:29.707 "copy": true, 00:27:29.707 "nvme_iov_md": false 00:27:29.707 }, 00:27:29.707 "memory_domains": [ 00:27:29.707 { 00:27:29.707 "dma_device_id": "system", 00:27:29.707 
"dma_device_type": 1 00:27:29.707 }, 00:27:29.707 { 00:27:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.707 "dma_device_type": 2 00:27:29.707 } 00:27:29.707 ], 00:27:29.707 "driver_specific": {} 00:27:29.707 } 00:27:29.707 ] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 BaseBdev4 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 [ 00:27:29.707 { 00:27:29.707 "name": "BaseBdev4", 00:27:29.707 "aliases": [ 00:27:29.707 "9d5cae16-0e7a-45ab-8b5a-da926e6b675d" 00:27:29.707 ], 00:27:29.707 "product_name": "Malloc disk", 00:27:29.707 "block_size": 512, 00:27:29.707 "num_blocks": 65536, 00:27:29.707 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:29.707 "assigned_rate_limits": { 00:27:29.707 "rw_ios_per_sec": 0, 00:27:29.707 "rw_mbytes_per_sec": 0, 00:27:29.707 "r_mbytes_per_sec": 0, 00:27:29.707 "w_mbytes_per_sec": 0 00:27:29.707 }, 00:27:29.707 "claimed": false, 00:27:29.707 "zoned": false, 00:27:29.707 "supported_io_types": { 00:27:29.707 "read": true, 00:27:29.707 "write": true, 00:27:29.707 "unmap": true, 00:27:29.707 "flush": true, 00:27:29.707 "reset": true, 00:27:29.707 "nvme_admin": false, 00:27:29.707 "nvme_io": false, 00:27:29.707 "nvme_io_md": false, 00:27:29.707 "write_zeroes": true, 00:27:29.707 "zcopy": true, 00:27:29.707 "get_zone_info": false, 00:27:29.707 "zone_management": false, 00:27:29.707 "zone_append": false, 00:27:29.707 "compare": false, 00:27:29.707 "compare_and_write": false, 00:27:29.707 "abort": true, 00:27:29.707 "seek_hole": false, 00:27:29.707 "seek_data": false, 00:27:29.707 "copy": true, 00:27:29.707 "nvme_iov_md": false 00:27:29.707 }, 00:27:29.707 "memory_domains": [ 00:27:29.707 { 00:27:29.707 "dma_device_id": "system", 00:27:29.707 
"dma_device_type": 1 00:27:29.707 }, 00:27:29.707 { 00:27:29.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.707 "dma_device_type": 2 00:27:29.707 } 00:27:29.707 ], 00:27:29.707 "driver_specific": {} 00:27:29.707 } 00:27:29.707 ] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 [2024-10-28 13:38:43.807897] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:29.707 [2024-10-28 13:38:43.808003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:29.707 [2024-10-28 13:38:43.808045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:29.707 [2024-10-28 13:38:43.810821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:29.707 [2024-10-28 13:38:43.811130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:29.707 13:38:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.707 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.966 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.966 "name": "Existed_Raid", 00:27:29.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.966 "strip_size_kb": 64, 00:27:29.966 "state": "configuring", 00:27:29.966 "raid_level": "concat", 00:27:29.966 "superblock": false, 00:27:29.966 "num_base_bdevs": 4, 00:27:29.966 "num_base_bdevs_discovered": 3, 00:27:29.966 
"num_base_bdevs_operational": 4, 00:27:29.966 "base_bdevs_list": [ 00:27:29.966 { 00:27:29.966 "name": "BaseBdev1", 00:27:29.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.966 "is_configured": false, 00:27:29.966 "data_offset": 0, 00:27:29.966 "data_size": 0 00:27:29.966 }, 00:27:29.966 { 00:27:29.966 "name": "BaseBdev2", 00:27:29.966 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:29.966 "is_configured": true, 00:27:29.966 "data_offset": 0, 00:27:29.966 "data_size": 65536 00:27:29.966 }, 00:27:29.966 { 00:27:29.966 "name": "BaseBdev3", 00:27:29.966 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:29.966 "is_configured": true, 00:27:29.966 "data_offset": 0, 00:27:29.966 "data_size": 65536 00:27:29.966 }, 00:27:29.966 { 00:27:29.966 "name": "BaseBdev4", 00:27:29.966 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:29.966 "is_configured": true, 00:27:29.966 "data_offset": 0, 00:27:29.966 "data_size": 65536 00:27:29.966 } 00:27:29.966 ] 00:27:29.966 }' 00:27:29.966 13:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.966 13:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.224 [2024-10-28 13:38:44.360193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.224 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.482 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.482 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.482 "name": "Existed_Raid", 00:27:30.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.482 "strip_size_kb": 64, 00:27:30.482 "state": "configuring", 00:27:30.482 "raid_level": "concat", 00:27:30.482 "superblock": false, 00:27:30.482 "num_base_bdevs": 4, 00:27:30.482 "num_base_bdevs_discovered": 2, 00:27:30.482 "num_base_bdevs_operational": 4, 00:27:30.482 "base_bdevs_list": [ 
00:27:30.482 { 00:27:30.482 "name": "BaseBdev1", 00:27:30.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.482 "is_configured": false, 00:27:30.482 "data_offset": 0, 00:27:30.482 "data_size": 0 00:27:30.482 }, 00:27:30.482 { 00:27:30.482 "name": null, 00:27:30.482 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:30.482 "is_configured": false, 00:27:30.482 "data_offset": 0, 00:27:30.482 "data_size": 65536 00:27:30.482 }, 00:27:30.482 { 00:27:30.482 "name": "BaseBdev3", 00:27:30.482 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:30.482 "is_configured": true, 00:27:30.482 "data_offset": 0, 00:27:30.482 "data_size": 65536 00:27:30.482 }, 00:27:30.482 { 00:27:30.482 "name": "BaseBdev4", 00:27:30.482 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:30.482 "is_configured": true, 00:27:30.482 "data_offset": 0, 00:27:30.482 "data_size": 65536 00:27:30.482 } 00:27:30.482 ] 00:27:30.482 }' 00:27:30.482 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.482 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 [2024-10-28 13:38:44.990499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:31.048 BaseBdev1 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 13:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.048 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:31.048 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.048 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.048 [ 00:27:31.048 { 00:27:31.048 "name": "BaseBdev1", 00:27:31.048 "aliases": [ 00:27:31.048 
"d6b875eb-921e-4f4e-9777-d5278f813104" 00:27:31.048 ], 00:27:31.048 "product_name": "Malloc disk", 00:27:31.048 "block_size": 512, 00:27:31.048 "num_blocks": 65536, 00:27:31.048 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:31.048 "assigned_rate_limits": { 00:27:31.049 "rw_ios_per_sec": 0, 00:27:31.049 "rw_mbytes_per_sec": 0, 00:27:31.049 "r_mbytes_per_sec": 0, 00:27:31.049 "w_mbytes_per_sec": 0 00:27:31.049 }, 00:27:31.049 "claimed": true, 00:27:31.049 "claim_type": "exclusive_write", 00:27:31.049 "zoned": false, 00:27:31.049 "supported_io_types": { 00:27:31.049 "read": true, 00:27:31.049 "write": true, 00:27:31.049 "unmap": true, 00:27:31.049 "flush": true, 00:27:31.049 "reset": true, 00:27:31.049 "nvme_admin": false, 00:27:31.049 "nvme_io": false, 00:27:31.049 "nvme_io_md": false, 00:27:31.049 "write_zeroes": true, 00:27:31.049 "zcopy": true, 00:27:31.049 "get_zone_info": false, 00:27:31.049 "zone_management": false, 00:27:31.049 "zone_append": false, 00:27:31.049 "compare": false, 00:27:31.049 "compare_and_write": false, 00:27:31.049 "abort": true, 00:27:31.049 "seek_hole": false, 00:27:31.049 "seek_data": false, 00:27:31.049 "copy": true, 00:27:31.049 "nvme_iov_md": false 00:27:31.049 }, 00:27:31.049 "memory_domains": [ 00:27:31.049 { 00:27:31.049 "dma_device_id": "system", 00:27:31.049 "dma_device_type": 1 00:27:31.049 }, 00:27:31.049 { 00:27:31.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:31.049 "dma_device_type": 2 00:27:31.049 } 00:27:31.049 ], 00:27:31.049 "driver_specific": {} 00:27:31.049 } 00:27:31.049 ] 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.049 "name": "Existed_Raid", 00:27:31.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.049 "strip_size_kb": 64, 00:27:31.049 "state": "configuring", 00:27:31.049 "raid_level": "concat", 00:27:31.049 "superblock": false, 00:27:31.049 "num_base_bdevs": 4, 00:27:31.049 "num_base_bdevs_discovered": 3, 00:27:31.049 "num_base_bdevs_operational": 4, 00:27:31.049 
"base_bdevs_list": [ 00:27:31.049 { 00:27:31.049 "name": "BaseBdev1", 00:27:31.049 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:31.049 "is_configured": true, 00:27:31.049 "data_offset": 0, 00:27:31.049 "data_size": 65536 00:27:31.049 }, 00:27:31.049 { 00:27:31.049 "name": null, 00:27:31.049 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:31.049 "is_configured": false, 00:27:31.049 "data_offset": 0, 00:27:31.049 "data_size": 65536 00:27:31.049 }, 00:27:31.049 { 00:27:31.049 "name": "BaseBdev3", 00:27:31.049 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:31.049 "is_configured": true, 00:27:31.049 "data_offset": 0, 00:27:31.049 "data_size": 65536 00:27:31.049 }, 00:27:31.049 { 00:27:31.049 "name": "BaseBdev4", 00:27:31.049 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:31.049 "is_configured": true, 00:27:31.049 "data_offset": 0, 00:27:31.049 "data_size": 65536 00:27:31.049 } 00:27:31.049 ] 00:27:31.049 }' 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.049 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:31.615 13:38:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.615 [2024-10-28 13:38:45.646799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.615 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.615 "name": "Existed_Raid", 00:27:31.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.615 "strip_size_kb": 64, 00:27:31.615 "state": "configuring", 00:27:31.615 "raid_level": "concat", 00:27:31.615 "superblock": false, 00:27:31.615 "num_base_bdevs": 4, 00:27:31.615 "num_base_bdevs_discovered": 2, 00:27:31.615 "num_base_bdevs_operational": 4, 00:27:31.615 "base_bdevs_list": [ 00:27:31.615 { 00:27:31.615 "name": "BaseBdev1", 00:27:31.616 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:31.616 "is_configured": true, 00:27:31.616 "data_offset": 0, 00:27:31.616 "data_size": 65536 00:27:31.616 }, 00:27:31.616 { 00:27:31.616 "name": null, 00:27:31.616 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:31.616 "is_configured": false, 00:27:31.616 "data_offset": 0, 00:27:31.616 "data_size": 65536 00:27:31.616 }, 00:27:31.616 { 00:27:31.616 "name": null, 00:27:31.616 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:31.616 "is_configured": false, 00:27:31.616 "data_offset": 0, 00:27:31.616 "data_size": 65536 00:27:31.616 }, 00:27:31.616 { 00:27:31.616 "name": "BaseBdev4", 00:27:31.616 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:31.616 "is_configured": true, 00:27:31.616 "data_offset": 0, 00:27:31.616 "data_size": 65536 00:27:31.616 } 00:27:31.616 ] 00:27:31.616 }' 00:27:31.616 13:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.616 13:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.181 [2024-10-28 13:38:46.259074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.181 
13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.181 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.181 "name": "Existed_Raid", 00:27:32.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.181 "strip_size_kb": 64, 00:27:32.181 "state": "configuring", 00:27:32.181 "raid_level": "concat", 00:27:32.181 "superblock": false, 00:27:32.181 "num_base_bdevs": 4, 00:27:32.181 "num_base_bdevs_discovered": 3, 00:27:32.181 "num_base_bdevs_operational": 4, 00:27:32.181 "base_bdevs_list": [ 00:27:32.181 { 00:27:32.182 "name": "BaseBdev1", 00:27:32.182 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:32.182 "is_configured": true, 00:27:32.182 "data_offset": 0, 00:27:32.182 "data_size": 65536 00:27:32.182 }, 00:27:32.182 { 00:27:32.182 "name": null, 00:27:32.182 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:32.182 "is_configured": false, 00:27:32.182 "data_offset": 0, 00:27:32.182 "data_size": 65536 00:27:32.182 }, 00:27:32.182 { 00:27:32.182 "name": "BaseBdev3", 00:27:32.182 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:32.182 "is_configured": true, 00:27:32.182 "data_offset": 0, 00:27:32.182 "data_size": 65536 00:27:32.182 }, 00:27:32.182 { 00:27:32.182 "name": "BaseBdev4", 
00:27:32.182 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:32.182 "is_configured": true, 00:27:32.182 "data_offset": 0, 00:27:32.182 "data_size": 65536 00:27:32.182 } 00:27:32.182 ] 00:27:32.182 }' 00:27:32.182 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.182 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.747 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.747 [2024-10-28 13:38:46.899384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.004 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.004 "name": "Existed_Raid", 00:27:33.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.004 "strip_size_kb": 64, 00:27:33.004 "state": "configuring", 00:27:33.004 "raid_level": "concat", 00:27:33.004 "superblock": false, 00:27:33.005 "num_base_bdevs": 4, 00:27:33.005 "num_base_bdevs_discovered": 2, 00:27:33.005 "num_base_bdevs_operational": 4, 00:27:33.005 "base_bdevs_list": [ 00:27:33.005 { 00:27:33.005 "name": null, 00:27:33.005 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:33.005 
"is_configured": false, 00:27:33.005 "data_offset": 0, 00:27:33.005 "data_size": 65536 00:27:33.005 }, 00:27:33.005 { 00:27:33.005 "name": null, 00:27:33.005 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:33.005 "is_configured": false, 00:27:33.005 "data_offset": 0, 00:27:33.005 "data_size": 65536 00:27:33.005 }, 00:27:33.005 { 00:27:33.005 "name": "BaseBdev3", 00:27:33.005 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:33.005 "is_configured": true, 00:27:33.005 "data_offset": 0, 00:27:33.005 "data_size": 65536 00:27:33.005 }, 00:27:33.005 { 00:27:33.005 "name": "BaseBdev4", 00:27:33.005 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:33.005 "is_configured": true, 00:27:33.005 "data_offset": 0, 00:27:33.005 "data_size": 65536 00:27:33.005 } 00:27:33.005 ] 00:27:33.005 }' 00:27:33.005 13:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.005 13:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:33.570 [2024-10-28 13:38:47.526979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.570 "name": "Existed_Raid", 00:27:33.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.570 "strip_size_kb": 64, 00:27:33.570 "state": "configuring", 00:27:33.570 "raid_level": "concat", 00:27:33.570 "superblock": false, 00:27:33.570 "num_base_bdevs": 4, 00:27:33.570 "num_base_bdevs_discovered": 3, 00:27:33.570 "num_base_bdevs_operational": 4, 00:27:33.570 "base_bdevs_list": [ 00:27:33.570 { 00:27:33.570 "name": null, 00:27:33.570 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:33.570 "is_configured": false, 00:27:33.570 "data_offset": 0, 00:27:33.570 "data_size": 65536 00:27:33.570 }, 00:27:33.570 { 00:27:33.570 "name": "BaseBdev2", 00:27:33.570 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:33.570 "is_configured": true, 00:27:33.570 "data_offset": 0, 00:27:33.570 "data_size": 65536 00:27:33.570 }, 00:27:33.570 { 00:27:33.570 "name": "BaseBdev3", 00:27:33.570 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:33.570 "is_configured": true, 00:27:33.570 "data_offset": 0, 00:27:33.570 "data_size": 65536 00:27:33.570 }, 00:27:33.570 { 00:27:33.570 "name": "BaseBdev4", 00:27:33.570 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:33.570 "is_configured": true, 00:27:33.570 "data_offset": 0, 00:27:33.570 "data_size": 65536 00:27:33.570 } 00:27:33.570 ] 00:27:33.570 }' 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.570 13:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 13:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6b875eb-921e-4f4e-9777-d5278f813104 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 [2024-10-28 13:38:48.221038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:34.135 [2024-10-28 13:38:48.221390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:34.135 [2024-10-28 13:38:48.221425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:27:34.135 [2024-10-28 13:38:48.221816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:27:34.135 [2024-10-28 13:38:48.221997] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:34.135 [2024-10-28 13:38:48.222012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:34.135 [2024-10-28 13:38:48.222318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.135 NewBaseBdev 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 [ 00:27:34.135 { 00:27:34.135 "name": "NewBaseBdev", 00:27:34.135 "aliases": [ 00:27:34.135 "d6b875eb-921e-4f4e-9777-d5278f813104" 00:27:34.135 ], 00:27:34.135 "product_name": "Malloc disk", 00:27:34.135 
"block_size": 512, 00:27:34.135 "num_blocks": 65536, 00:27:34.135 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:34.135 "assigned_rate_limits": { 00:27:34.135 "rw_ios_per_sec": 0, 00:27:34.135 "rw_mbytes_per_sec": 0, 00:27:34.135 "r_mbytes_per_sec": 0, 00:27:34.135 "w_mbytes_per_sec": 0 00:27:34.135 }, 00:27:34.135 "claimed": true, 00:27:34.135 "claim_type": "exclusive_write", 00:27:34.135 "zoned": false, 00:27:34.135 "supported_io_types": { 00:27:34.135 "read": true, 00:27:34.135 "write": true, 00:27:34.135 "unmap": true, 00:27:34.135 "flush": true, 00:27:34.135 "reset": true, 00:27:34.135 "nvme_admin": false, 00:27:34.135 "nvme_io": false, 00:27:34.135 "nvme_io_md": false, 00:27:34.135 "write_zeroes": true, 00:27:34.135 "zcopy": true, 00:27:34.135 "get_zone_info": false, 00:27:34.135 "zone_management": false, 00:27:34.135 "zone_append": false, 00:27:34.135 "compare": false, 00:27:34.135 "compare_and_write": false, 00:27:34.135 "abort": true, 00:27:34.135 "seek_hole": false, 00:27:34.135 "seek_data": false, 00:27:34.135 "copy": true, 00:27:34.135 "nvme_iov_md": false 00:27:34.135 }, 00:27:34.135 "memory_domains": [ 00:27:34.135 { 00:27:34.135 "dma_device_id": "system", 00:27:34.135 "dma_device_type": 1 00:27:34.135 }, 00:27:34.135 { 00:27:34.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.135 "dma_device_type": 2 00:27:34.135 } 00:27:34.135 ], 00:27:34.135 "driver_specific": {} 00:27:34.135 } 00:27:34.135 ] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.135 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.393 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:34.393 "name": "Existed_Raid", 00:27:34.393 "uuid": "c84f456c-c54c-41a7-a30d-e3ddabca3a16", 00:27:34.393 "strip_size_kb": 64, 00:27:34.393 "state": "online", 00:27:34.393 "raid_level": "concat", 00:27:34.393 "superblock": false, 00:27:34.393 "num_base_bdevs": 4, 00:27:34.393 "num_base_bdevs_discovered": 4, 00:27:34.393 "num_base_bdevs_operational": 4, 00:27:34.393 "base_bdevs_list": [ 00:27:34.393 { 00:27:34.393 "name": "NewBaseBdev", 00:27:34.393 "uuid": 
"d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:34.393 "is_configured": true, 00:27:34.393 "data_offset": 0, 00:27:34.393 "data_size": 65536 00:27:34.393 }, 00:27:34.393 { 00:27:34.393 "name": "BaseBdev2", 00:27:34.393 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:34.393 "is_configured": true, 00:27:34.393 "data_offset": 0, 00:27:34.393 "data_size": 65536 00:27:34.393 }, 00:27:34.393 { 00:27:34.393 "name": "BaseBdev3", 00:27:34.393 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:34.393 "is_configured": true, 00:27:34.393 "data_offset": 0, 00:27:34.393 "data_size": 65536 00:27:34.393 }, 00:27:34.393 { 00:27:34.393 "name": "BaseBdev4", 00:27:34.393 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:34.393 "is_configured": true, 00:27:34.393 "data_offset": 0, 00:27:34.393 "data_size": 65536 00:27:34.393 } 00:27:34.393 ] 00:27:34.393 }' 00:27:34.393 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:34.393 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:34.651 13:38:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.651 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.651 [2024-10-28 13:38:48.781848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.908 "name": "Existed_Raid", 00:27:34.908 "aliases": [ 00:27:34.908 "c84f456c-c54c-41a7-a30d-e3ddabca3a16" 00:27:34.908 ], 00:27:34.908 "product_name": "Raid Volume", 00:27:34.908 "block_size": 512, 00:27:34.908 "num_blocks": 262144, 00:27:34.908 "uuid": "c84f456c-c54c-41a7-a30d-e3ddabca3a16", 00:27:34.908 "assigned_rate_limits": { 00:27:34.908 "rw_ios_per_sec": 0, 00:27:34.908 "rw_mbytes_per_sec": 0, 00:27:34.908 "r_mbytes_per_sec": 0, 00:27:34.908 "w_mbytes_per_sec": 0 00:27:34.908 }, 00:27:34.908 "claimed": false, 00:27:34.908 "zoned": false, 00:27:34.908 "supported_io_types": { 00:27:34.908 "read": true, 00:27:34.908 "write": true, 00:27:34.908 "unmap": true, 00:27:34.908 "flush": true, 00:27:34.908 "reset": true, 00:27:34.908 "nvme_admin": false, 00:27:34.908 "nvme_io": false, 00:27:34.908 "nvme_io_md": false, 00:27:34.908 "write_zeroes": true, 00:27:34.908 "zcopy": false, 00:27:34.908 "get_zone_info": false, 00:27:34.908 "zone_management": false, 00:27:34.908 "zone_append": false, 00:27:34.908 "compare": false, 00:27:34.908 "compare_and_write": false, 00:27:34.908 "abort": false, 00:27:34.908 "seek_hole": false, 00:27:34.908 "seek_data": false, 00:27:34.908 "copy": false, 00:27:34.908 "nvme_iov_md": false 00:27:34.908 }, 00:27:34.908 "memory_domains": [ 00:27:34.908 { 00:27:34.908 "dma_device_id": "system", 00:27:34.908 "dma_device_type": 1 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.908 
"dma_device_type": 2 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "system", 00:27:34.908 "dma_device_type": 1 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.908 "dma_device_type": 2 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "system", 00:27:34.908 "dma_device_type": 1 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.908 "dma_device_type": 2 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "system", 00:27:34.908 "dma_device_type": 1 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:34.908 "dma_device_type": 2 00:27:34.908 } 00:27:34.908 ], 00:27:34.908 "driver_specific": { 00:27:34.908 "raid": { 00:27:34.908 "uuid": "c84f456c-c54c-41a7-a30d-e3ddabca3a16", 00:27:34.908 "strip_size_kb": 64, 00:27:34.908 "state": "online", 00:27:34.908 "raid_level": "concat", 00:27:34.908 "superblock": false, 00:27:34.908 "num_base_bdevs": 4, 00:27:34.908 "num_base_bdevs_discovered": 4, 00:27:34.908 "num_base_bdevs_operational": 4, 00:27:34.908 "base_bdevs_list": [ 00:27:34.908 { 00:27:34.908 "name": "NewBaseBdev", 00:27:34.908 "uuid": "d6b875eb-921e-4f4e-9777-d5278f813104", 00:27:34.908 "is_configured": true, 00:27:34.908 "data_offset": 0, 00:27:34.908 "data_size": 65536 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "name": "BaseBdev2", 00:27:34.908 "uuid": "391893e8-b7d2-4e2b-99a6-d966608d9b9b", 00:27:34.908 "is_configured": true, 00:27:34.908 "data_offset": 0, 00:27:34.908 "data_size": 65536 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "name": "BaseBdev3", 00:27:34.908 "uuid": "309cfbfb-d4db-4764-85a1-79337117ce8b", 00:27:34.908 "is_configured": true, 00:27:34.908 "data_offset": 0, 00:27:34.908 "data_size": 65536 00:27:34.908 }, 00:27:34.908 { 00:27:34.908 "name": "BaseBdev4", 00:27:34.908 "uuid": "9d5cae16-0e7a-45ab-8b5a-da926e6b675d", 00:27:34.908 "is_configured": true, 00:27:34.908 "data_offset": 0, 
00:27:34.908 "data_size": 65536 00:27:34.908 } 00:27:34.908 ] 00:27:34.908 } 00:27:34.908 } 00:27:34.908 }' 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:34.908 BaseBdev2 00:27:34.908 BaseBdev3 00:27:34.908 BaseBdev4' 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:34.908 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.909 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.909 13:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:34.909 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.909 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.909 13:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:34.909 13:38:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.909 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:35.167 
13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.167 [2024-10-28 13:38:49.205465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:35.167 [2024-10-28 13:38:49.205505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:35.167 [2024-10-28 13:38:49.205649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.167 [2024-10-28 13:38:49.205829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.167 [2024-10-28 13:38:49.205914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84020 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84020 ']' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84020 00:27:35.167 13:38:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84020 00:27:35.167 killing process with pid 84020 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84020' 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84020 00:27:35.167 [2024-10-28 13:38:49.248978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:35.167 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84020 00:27:35.167 [2024-10-28 13:38:49.303731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:35.732 00:27:35.732 real 0m11.837s 00:27:35.732 user 0m20.808s 00:27:35.732 sys 0m1.813s 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.732 ************************************ 00:27:35.732 END TEST raid_state_function_test 00:27:35.732 ************************************ 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.732 13:38:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:27:35.732 13:38:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:35.732 13:38:49 bdev_raid -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:27:35.732 13:38:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:35.732 ************************************ 00:27:35.732 START TEST raid_state_function_test_sb 00:27:35.732 ************************************ 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:35.732 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84697 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 84697' 00:27:35.733 Process raid pid: 84697 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84697 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84697 ']' 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.733 13:38:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:35.733 [2024-10-28 13:38:49.799038] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:35.733 [2024-10-28 13:38:49.799494] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.991 [2024-10-28 13:38:49.948465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:35.991 [2024-10-28 13:38:49.979222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.991 [2024-10-28 13:38:50.050949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.991 [2024-10-28 13:38:50.132894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.991 [2024-10-28 13:38:50.132954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.926 [2024-10-28 13:38:50.834401] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:36.926 [2024-10-28 13:38:50.834464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:36.926 [2024-10-28 13:38:50.834483] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:36.926 [2024-10-28 13:38:50.834496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:36.926 [2024-10-28 13:38:50.834510] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:36.926 [2024-10-28 13:38:50.834520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:36.926 [2024-10-28 13:38:50.834532] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:27:36.926 [2024-10-28 13:38:50.834542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.926 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.926 "name": "Existed_Raid", 00:27:36.926 "uuid": "5cb8e06b-a066-4696-9ff6-2637b4aefc41", 00:27:36.926 "strip_size_kb": 64, 00:27:36.926 "state": "configuring", 00:27:36.926 "raid_level": "concat", 00:27:36.926 "superblock": true, 00:27:36.926 "num_base_bdevs": 4, 00:27:36.926 "num_base_bdevs_discovered": 0, 00:27:36.926 "num_base_bdevs_operational": 4, 00:27:36.926 "base_bdevs_list": [ 00:27:36.926 { 00:27:36.926 "name": "BaseBdev1", 00:27:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.926 "is_configured": false, 00:27:36.926 "data_offset": 0, 00:27:36.926 "data_size": 0 00:27:36.926 }, 00:27:36.926 { 00:27:36.926 "name": "BaseBdev2", 00:27:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.926 "is_configured": false, 00:27:36.927 "data_offset": 0, 00:27:36.927 "data_size": 0 00:27:36.927 }, 00:27:36.927 { 00:27:36.927 "name": "BaseBdev3", 00:27:36.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.927 "is_configured": false, 00:27:36.927 "data_offset": 0, 00:27:36.927 "data_size": 0 00:27:36.927 }, 00:27:36.927 { 00:27:36.927 "name": "BaseBdev4", 00:27:36.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.927 "is_configured": false, 00:27:36.927 "data_offset": 0, 00:27:36.927 "data_size": 0 00:27:36.927 } 00:27:36.927 ] 00:27:36.927 }' 00:27:36.927 13:38:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.927 13:38:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.493 [2024-10-28 13:38:51.382520] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:37.493 [2024-10-28 13:38:51.382585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 [2024-10-28 13:38:51.390595] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:37.493 [2024-10-28 13:38:51.390649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:37.493 [2024-10-28 13:38:51.390670] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:37.493 [2024-10-28 13:38:51.390684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:37.493 [2024-10-28 13:38:51.390696] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:37.493 [2024-10-28 13:38:51.390708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:37.493 [2024-10-28 13:38:51.390721] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:37.493 [2024-10-28 13:38:51.390733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 [2024-10-28 13:38:51.414404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:37.493 BaseBdev1 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 [ 00:27:37.493 { 00:27:37.493 "name": "BaseBdev1", 00:27:37.493 "aliases": [ 00:27:37.493 "a1e79950-e480-4122-b4b5-7eed7fb7445e" 00:27:37.493 ], 00:27:37.493 "product_name": "Malloc disk", 00:27:37.493 "block_size": 512, 00:27:37.493 "num_blocks": 65536, 00:27:37.493 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:37.493 "assigned_rate_limits": { 00:27:37.493 "rw_ios_per_sec": 0, 00:27:37.493 "rw_mbytes_per_sec": 0, 00:27:37.493 "r_mbytes_per_sec": 0, 00:27:37.493 "w_mbytes_per_sec": 0 00:27:37.493 }, 00:27:37.493 "claimed": true, 00:27:37.493 "claim_type": "exclusive_write", 00:27:37.493 "zoned": false, 00:27:37.493 "supported_io_types": { 00:27:37.493 "read": true, 00:27:37.493 "write": true, 00:27:37.493 "unmap": true, 00:27:37.493 "flush": true, 00:27:37.493 "reset": true, 00:27:37.493 "nvme_admin": false, 00:27:37.493 "nvme_io": false, 00:27:37.493 "nvme_io_md": false, 00:27:37.493 "write_zeroes": true, 00:27:37.493 "zcopy": true, 00:27:37.493 "get_zone_info": false, 00:27:37.493 "zone_management": false, 00:27:37.493 "zone_append": false, 00:27:37.493 "compare": false, 00:27:37.493 "compare_and_write": false, 00:27:37.493 "abort": true, 00:27:37.493 "seek_hole": false, 00:27:37.493 "seek_data": false, 00:27:37.493 "copy": true, 00:27:37.493 "nvme_iov_md": false 00:27:37.493 }, 00:27:37.493 "memory_domains": [ 00:27:37.493 { 00:27:37.493 "dma_device_id": "system", 00:27:37.493 "dma_device_type": 1 00:27:37.493 }, 00:27:37.493 { 00:27:37.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:37.493 "dma_device_type": 2 00:27:37.493 } 00:27:37.493 ], 00:27:37.493 "driver_specific": {} 00:27:37.493 } 00:27:37.493 ] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.493 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.493 "name": "Existed_Raid", 00:27:37.493 "uuid": "da5ec601-b1d4-42d8-8808-c30071e1d0cb", 
00:27:37.493 "strip_size_kb": 64, 00:27:37.493 "state": "configuring", 00:27:37.493 "raid_level": "concat", 00:27:37.493 "superblock": true, 00:27:37.494 "num_base_bdevs": 4, 00:27:37.494 "num_base_bdevs_discovered": 1, 00:27:37.494 "num_base_bdevs_operational": 4, 00:27:37.494 "base_bdevs_list": [ 00:27:37.494 { 00:27:37.494 "name": "BaseBdev1", 00:27:37.494 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:37.494 "is_configured": true, 00:27:37.494 "data_offset": 2048, 00:27:37.494 "data_size": 63488 00:27:37.494 }, 00:27:37.494 { 00:27:37.494 "name": "BaseBdev2", 00:27:37.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.494 "is_configured": false, 00:27:37.494 "data_offset": 0, 00:27:37.494 "data_size": 0 00:27:37.494 }, 00:27:37.494 { 00:27:37.494 "name": "BaseBdev3", 00:27:37.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.494 "is_configured": false, 00:27:37.494 "data_offset": 0, 00:27:37.494 "data_size": 0 00:27:37.494 }, 00:27:37.494 { 00:27:37.494 "name": "BaseBdev4", 00:27:37.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.494 "is_configured": false, 00:27:37.494 "data_offset": 0, 00:27:37.494 "data_size": 0 00:27:37.494 } 00:27:37.494 ] 00:27:37.494 }' 00:27:37.494 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.494 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.104 [2024-10-28 13:38:51.966639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:38.104 [2024-10-28 13:38:51.966910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.104 [2024-10-28 13:38:51.978669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:38.104 [2024-10-28 13:38:51.981591] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:38.104 [2024-10-28 13:38:51.981798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:38.104 [2024-10-28 13:38:51.981925] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:38.104 [2024-10-28 13:38:51.981981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:38.104 [2024-10-28 13:38:51.982107] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:38.104 [2024-10-28 13:38:51.982185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.104 13:38:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.104 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.104 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.104 "name": "Existed_Raid", 00:27:38.104 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:38.104 "strip_size_kb": 64, 00:27:38.104 "state": "configuring", 00:27:38.104 "raid_level": "concat", 00:27:38.104 "superblock": true, 00:27:38.104 
"num_base_bdevs": 4, 00:27:38.104 "num_base_bdevs_discovered": 1, 00:27:38.104 "num_base_bdevs_operational": 4, 00:27:38.104 "base_bdevs_list": [ 00:27:38.104 { 00:27:38.104 "name": "BaseBdev1", 00:27:38.104 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:38.104 "is_configured": true, 00:27:38.104 "data_offset": 2048, 00:27:38.104 "data_size": 63488 00:27:38.104 }, 00:27:38.104 { 00:27:38.104 "name": "BaseBdev2", 00:27:38.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.104 "is_configured": false, 00:27:38.104 "data_offset": 0, 00:27:38.105 "data_size": 0 00:27:38.105 }, 00:27:38.105 { 00:27:38.105 "name": "BaseBdev3", 00:27:38.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.105 "is_configured": false, 00:27:38.105 "data_offset": 0, 00:27:38.105 "data_size": 0 00:27:38.105 }, 00:27:38.105 { 00:27:38.105 "name": "BaseBdev4", 00:27:38.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.105 "is_configured": false, 00:27:38.105 "data_offset": 0, 00:27:38.105 "data_size": 0 00:27:38.105 } 00:27:38.105 ] 00:27:38.105 }' 00:27:38.105 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.105 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.671 [2024-10-28 13:38:52.544730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.671 BaseBdev2 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:38.671 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.672 [ 00:27:38.672 { 00:27:38.672 "name": "BaseBdev2", 00:27:38.672 "aliases": [ 00:27:38.672 "e761348e-ee54-487f-92a0-bf8a6fe83c1d" 00:27:38.672 ], 00:27:38.672 "product_name": "Malloc disk", 00:27:38.672 "block_size": 512, 00:27:38.672 "num_blocks": 65536, 00:27:38.672 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:38.672 "assigned_rate_limits": { 00:27:38.672 "rw_ios_per_sec": 0, 00:27:38.672 "rw_mbytes_per_sec": 0, 00:27:38.672 "r_mbytes_per_sec": 0, 00:27:38.672 "w_mbytes_per_sec": 0 00:27:38.672 }, 00:27:38.672 "claimed": true, 00:27:38.672 "claim_type": 
"exclusive_write", 00:27:38.672 "zoned": false, 00:27:38.672 "supported_io_types": { 00:27:38.672 "read": true, 00:27:38.672 "write": true, 00:27:38.672 "unmap": true, 00:27:38.672 "flush": true, 00:27:38.672 "reset": true, 00:27:38.672 "nvme_admin": false, 00:27:38.672 "nvme_io": false, 00:27:38.672 "nvme_io_md": false, 00:27:38.672 "write_zeroes": true, 00:27:38.672 "zcopy": true, 00:27:38.672 "get_zone_info": false, 00:27:38.672 "zone_management": false, 00:27:38.672 "zone_append": false, 00:27:38.672 "compare": false, 00:27:38.672 "compare_and_write": false, 00:27:38.672 "abort": true, 00:27:38.672 "seek_hole": false, 00:27:38.672 "seek_data": false, 00:27:38.672 "copy": true, 00:27:38.672 "nvme_iov_md": false 00:27:38.672 }, 00:27:38.672 "memory_domains": [ 00:27:38.672 { 00:27:38.672 "dma_device_id": "system", 00:27:38.672 "dma_device_type": 1 00:27:38.672 }, 00:27:38.672 { 00:27:38.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.672 "dma_device_type": 2 00:27:38.672 } 00:27:38.672 ], 00:27:38.672 "driver_specific": {} 00:27:38.672 } 00:27:38.672 ] 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.672 "name": "Existed_Raid", 00:27:38.672 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:38.672 "strip_size_kb": 64, 00:27:38.672 "state": "configuring", 00:27:38.672 "raid_level": "concat", 00:27:38.672 "superblock": true, 00:27:38.672 "num_base_bdevs": 4, 00:27:38.672 "num_base_bdevs_discovered": 2, 00:27:38.672 "num_base_bdevs_operational": 4, 00:27:38.672 "base_bdevs_list": [ 00:27:38.672 { 00:27:38.672 "name": "BaseBdev1", 00:27:38.672 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:38.672 "is_configured": true, 00:27:38.672 "data_offset": 2048, 00:27:38.672 
"data_size": 63488 00:27:38.672 }, 00:27:38.672 { 00:27:38.672 "name": "BaseBdev2", 00:27:38.672 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:38.672 "is_configured": true, 00:27:38.672 "data_offset": 2048, 00:27:38.672 "data_size": 63488 00:27:38.672 }, 00:27:38.672 { 00:27:38.672 "name": "BaseBdev3", 00:27:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.672 "is_configured": false, 00:27:38.672 "data_offset": 0, 00:27:38.672 "data_size": 0 00:27:38.672 }, 00:27:38.672 { 00:27:38.672 "name": "BaseBdev4", 00:27:38.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.672 "is_configured": false, 00:27:38.672 "data_offset": 0, 00:27:38.672 "data_size": 0 00:27:38.672 } 00:27:38.672 ] 00:27:38.672 }' 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.672 13:38:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.931 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:38.931 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.931 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.190 [2024-10-28 13:38:53.106779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:39.190 BaseBdev3 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.190 [ 00:27:39.190 { 00:27:39.190 "name": "BaseBdev3", 00:27:39.190 "aliases": [ 00:27:39.190 "69982402-740f-4d9b-a8ce-362bf09ac47a" 00:27:39.190 ], 00:27:39.190 "product_name": "Malloc disk", 00:27:39.190 "block_size": 512, 00:27:39.190 "num_blocks": 65536, 00:27:39.190 "uuid": "69982402-740f-4d9b-a8ce-362bf09ac47a", 00:27:39.190 "assigned_rate_limits": { 00:27:39.190 "rw_ios_per_sec": 0, 00:27:39.190 "rw_mbytes_per_sec": 0, 00:27:39.190 "r_mbytes_per_sec": 0, 00:27:39.190 "w_mbytes_per_sec": 0 00:27:39.190 }, 00:27:39.190 "claimed": true, 00:27:39.190 "claim_type": "exclusive_write", 00:27:39.190 "zoned": false, 00:27:39.190 "supported_io_types": { 00:27:39.190 "read": true, 00:27:39.190 "write": true, 00:27:39.190 "unmap": true, 00:27:39.190 "flush": true, 00:27:39.190 "reset": true, 00:27:39.190 "nvme_admin": false, 00:27:39.190 "nvme_io": false, 00:27:39.190 "nvme_io_md": false, 
00:27:39.190 "write_zeroes": true, 00:27:39.190 "zcopy": true, 00:27:39.190 "get_zone_info": false, 00:27:39.190 "zone_management": false, 00:27:39.190 "zone_append": false, 00:27:39.190 "compare": false, 00:27:39.190 "compare_and_write": false, 00:27:39.190 "abort": true, 00:27:39.190 "seek_hole": false, 00:27:39.190 "seek_data": false, 00:27:39.190 "copy": true, 00:27:39.190 "nvme_iov_md": false 00:27:39.190 }, 00:27:39.190 "memory_domains": [ 00:27:39.190 { 00:27:39.190 "dma_device_id": "system", 00:27:39.190 "dma_device_type": 1 00:27:39.190 }, 00:27:39.190 { 00:27:39.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.190 "dma_device_type": 2 00:27:39.190 } 00:27:39.190 ], 00:27:39.190 "driver_specific": {} 00:27:39.190 } 00:27:39.190 ] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.190 "name": "Existed_Raid", 00:27:39.190 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:39.190 "strip_size_kb": 64, 00:27:39.190 "state": "configuring", 00:27:39.190 "raid_level": "concat", 00:27:39.190 "superblock": true, 00:27:39.190 "num_base_bdevs": 4, 00:27:39.190 "num_base_bdevs_discovered": 3, 00:27:39.190 "num_base_bdevs_operational": 4, 00:27:39.190 "base_bdevs_list": [ 00:27:39.190 { 00:27:39.190 "name": "BaseBdev1", 00:27:39.190 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:39.190 "is_configured": true, 00:27:39.190 "data_offset": 2048, 00:27:39.190 "data_size": 63488 00:27:39.190 }, 00:27:39.190 { 00:27:39.190 "name": "BaseBdev2", 00:27:39.190 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:39.190 "is_configured": true, 00:27:39.190 "data_offset": 2048, 00:27:39.190 "data_size": 63488 00:27:39.190 }, 00:27:39.190 { 00:27:39.190 "name": "BaseBdev3", 00:27:39.190 "uuid": 
"69982402-740f-4d9b-a8ce-362bf09ac47a", 00:27:39.190 "is_configured": true, 00:27:39.190 "data_offset": 2048, 00:27:39.190 "data_size": 63488 00:27:39.190 }, 00:27:39.190 { 00:27:39.190 "name": "BaseBdev4", 00:27:39.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.190 "is_configured": false, 00:27:39.190 "data_offset": 0, 00:27:39.190 "data_size": 0 00:27:39.190 } 00:27:39.190 ] 00:27:39.190 }' 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.190 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.758 [2024-10-28 13:38:53.663708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:39.758 [2024-10-28 13:38:53.664010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:39.758 [2024-10-28 13:38:53.664035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:39.758 BaseBdev4 00:27:39.758 [2024-10-28 13:38:53.664398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:39.758 [2024-10-28 13:38:53.664586] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:39.758 [2024-10-28 13:38:53.664614] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:27:39.758 [2024-10-28 13:38:53.664769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.758 [ 00:27:39.758 { 00:27:39.758 "name": "BaseBdev4", 00:27:39.758 "aliases": [ 00:27:39.758 "a746649e-7aa9-492c-a5d5-8f57d78ece1e" 00:27:39.758 ], 00:27:39.758 "product_name": "Malloc disk", 00:27:39.758 "block_size": 512, 00:27:39.758 "num_blocks": 65536, 00:27:39.758 "uuid": "a746649e-7aa9-492c-a5d5-8f57d78ece1e", 00:27:39.758 "assigned_rate_limits": { 00:27:39.758 "rw_ios_per_sec": 0, 00:27:39.758 "rw_mbytes_per_sec": 0, 00:27:39.758 "r_mbytes_per_sec": 0, 
00:27:39.758 "w_mbytes_per_sec": 0 00:27:39.758 }, 00:27:39.758 "claimed": true, 00:27:39.758 "claim_type": "exclusive_write", 00:27:39.758 "zoned": false, 00:27:39.758 "supported_io_types": { 00:27:39.758 "read": true, 00:27:39.758 "write": true, 00:27:39.758 "unmap": true, 00:27:39.758 "flush": true, 00:27:39.758 "reset": true, 00:27:39.758 "nvme_admin": false, 00:27:39.758 "nvme_io": false, 00:27:39.758 "nvme_io_md": false, 00:27:39.758 "write_zeroes": true, 00:27:39.758 "zcopy": true, 00:27:39.758 "get_zone_info": false, 00:27:39.758 "zone_management": false, 00:27:39.758 "zone_append": false, 00:27:39.758 "compare": false, 00:27:39.758 "compare_and_write": false, 00:27:39.758 "abort": true, 00:27:39.758 "seek_hole": false, 00:27:39.758 "seek_data": false, 00:27:39.758 "copy": true, 00:27:39.758 "nvme_iov_md": false 00:27:39.758 }, 00:27:39.758 "memory_domains": [ 00:27:39.758 { 00:27:39.758 "dma_device_id": "system", 00:27:39.758 "dma_device_type": 1 00:27:39.758 }, 00:27:39.758 { 00:27:39.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.758 "dma_device_type": 2 00:27:39.758 } 00:27:39.758 ], 00:27:39.758 "driver_specific": {} 00:27:39.758 } 00:27:39.758 ] 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:27:39.758 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:39.759 "name": "Existed_Raid", 00:27:39.759 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:39.759 "strip_size_kb": 64, 00:27:39.759 "state": "online", 00:27:39.759 "raid_level": "concat", 00:27:39.759 "superblock": true, 00:27:39.759 "num_base_bdevs": 4, 00:27:39.759 "num_base_bdevs_discovered": 4, 00:27:39.759 "num_base_bdevs_operational": 4, 00:27:39.759 "base_bdevs_list": [ 00:27:39.759 { 00:27:39.759 "name": "BaseBdev1", 00:27:39.759 "uuid": 
"a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:39.759 "is_configured": true, 00:27:39.759 "data_offset": 2048, 00:27:39.759 "data_size": 63488 00:27:39.759 }, 00:27:39.759 { 00:27:39.759 "name": "BaseBdev2", 00:27:39.759 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:39.759 "is_configured": true, 00:27:39.759 "data_offset": 2048, 00:27:39.759 "data_size": 63488 00:27:39.759 }, 00:27:39.759 { 00:27:39.759 "name": "BaseBdev3", 00:27:39.759 "uuid": "69982402-740f-4d9b-a8ce-362bf09ac47a", 00:27:39.759 "is_configured": true, 00:27:39.759 "data_offset": 2048, 00:27:39.759 "data_size": 63488 00:27:39.759 }, 00:27:39.759 { 00:27:39.759 "name": "BaseBdev4", 00:27:39.759 "uuid": "a746649e-7aa9-492c-a5d5-8f57d78ece1e", 00:27:39.759 "is_configured": true, 00:27:39.759 "data_offset": 2048, 00:27:39.759 "data_size": 63488 00:27:39.759 } 00:27:39.759 ] 00:27:39.759 }' 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:39.759 13:38:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.325 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:40.325 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:40.325 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:40.325 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:40.325 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.326 [2024-10-28 13:38:54.288608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:40.326 "name": "Existed_Raid", 00:27:40.326 "aliases": [ 00:27:40.326 "b9d0550d-a136-43e9-af78-4a6272233d01" 00:27:40.326 ], 00:27:40.326 "product_name": "Raid Volume", 00:27:40.326 "block_size": 512, 00:27:40.326 "num_blocks": 253952, 00:27:40.326 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:40.326 "assigned_rate_limits": { 00:27:40.326 "rw_ios_per_sec": 0, 00:27:40.326 "rw_mbytes_per_sec": 0, 00:27:40.326 "r_mbytes_per_sec": 0, 00:27:40.326 "w_mbytes_per_sec": 0 00:27:40.326 }, 00:27:40.326 "claimed": false, 00:27:40.326 "zoned": false, 00:27:40.326 "supported_io_types": { 00:27:40.326 "read": true, 00:27:40.326 "write": true, 00:27:40.326 "unmap": true, 00:27:40.326 "flush": true, 00:27:40.326 "reset": true, 00:27:40.326 "nvme_admin": false, 00:27:40.326 "nvme_io": false, 00:27:40.326 "nvme_io_md": false, 00:27:40.326 "write_zeroes": true, 00:27:40.326 "zcopy": false, 00:27:40.326 "get_zone_info": false, 00:27:40.326 "zone_management": false, 00:27:40.326 "zone_append": false, 00:27:40.326 "compare": false, 00:27:40.326 "compare_and_write": false, 00:27:40.326 "abort": false, 00:27:40.326 "seek_hole": false, 00:27:40.326 "seek_data": false, 00:27:40.326 "copy": false, 00:27:40.326 "nvme_iov_md": false 00:27:40.326 }, 00:27:40.326 "memory_domains": [ 00:27:40.326 { 00:27:40.326 "dma_device_id": "system", 00:27:40.326 "dma_device_type": 1 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.326 "dma_device_type": 2 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "system", 00:27:40.326 "dma_device_type": 1 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.326 "dma_device_type": 2 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "system", 00:27:40.326 "dma_device_type": 1 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.326 "dma_device_type": 2 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "system", 00:27:40.326 "dma_device_type": 1 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.326 "dma_device_type": 2 00:27:40.326 } 00:27:40.326 ], 00:27:40.326 "driver_specific": { 00:27:40.326 "raid": { 00:27:40.326 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 00:27:40.326 "strip_size_kb": 64, 00:27:40.326 "state": "online", 00:27:40.326 "raid_level": "concat", 00:27:40.326 "superblock": true, 00:27:40.326 "num_base_bdevs": 4, 00:27:40.326 "num_base_bdevs_discovered": 4, 00:27:40.326 "num_base_bdevs_operational": 4, 00:27:40.326 "base_bdevs_list": [ 00:27:40.326 { 00:27:40.326 "name": "BaseBdev1", 00:27:40.326 "uuid": "a1e79950-e480-4122-b4b5-7eed7fb7445e", 00:27:40.326 "is_configured": true, 00:27:40.326 "data_offset": 2048, 00:27:40.326 "data_size": 63488 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "name": "BaseBdev2", 00:27:40.326 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:40.326 "is_configured": true, 00:27:40.326 "data_offset": 2048, 00:27:40.326 "data_size": 63488 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "name": "BaseBdev3", 00:27:40.326 "uuid": "69982402-740f-4d9b-a8ce-362bf09ac47a", 00:27:40.326 "is_configured": true, 00:27:40.326 "data_offset": 2048, 00:27:40.326 "data_size": 63488 00:27:40.326 }, 00:27:40.326 { 00:27:40.326 "name": "BaseBdev4", 00:27:40.326 "uuid": "a746649e-7aa9-492c-a5d5-8f57d78ece1e", 
00:27:40.326 "is_configured": true, 00:27:40.326 "data_offset": 2048, 00:27:40.326 "data_size": 63488 00:27:40.326 } 00:27:40.326 ] 00:27:40.326 } 00:27:40.326 } 00:27:40.326 }' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:40.326 BaseBdev2 00:27:40.326 BaseBdev3 00:27:40.326 BaseBdev4' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.326 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.585 [2024-10-28 13:38:54.704439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.585 [2024-10-28 13:38:54.704480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:40.585 [2024-10-28 13:38:54.704607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- 
# expected_state=offline 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.585 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.845 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.845 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.845 "name": "Existed_Raid", 00:27:40.845 "uuid": "b9d0550d-a136-43e9-af78-4a6272233d01", 
00:27:40.845 "strip_size_kb": 64, 00:27:40.845 "state": "offline", 00:27:40.845 "raid_level": "concat", 00:27:40.845 "superblock": true, 00:27:40.845 "num_base_bdevs": 4, 00:27:40.845 "num_base_bdevs_discovered": 3, 00:27:40.845 "num_base_bdevs_operational": 3, 00:27:40.845 "base_bdevs_list": [ 00:27:40.845 { 00:27:40.845 "name": null, 00:27:40.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.845 "is_configured": false, 00:27:40.845 "data_offset": 0, 00:27:40.845 "data_size": 63488 00:27:40.845 }, 00:27:40.845 { 00:27:40.845 "name": "BaseBdev2", 00:27:40.845 "uuid": "e761348e-ee54-487f-92a0-bf8a6fe83c1d", 00:27:40.845 "is_configured": true, 00:27:40.845 "data_offset": 2048, 00:27:40.845 "data_size": 63488 00:27:40.845 }, 00:27:40.845 { 00:27:40.845 "name": "BaseBdev3", 00:27:40.845 "uuid": "69982402-740f-4d9b-a8ce-362bf09ac47a", 00:27:40.845 "is_configured": true, 00:27:40.845 "data_offset": 2048, 00:27:40.845 "data_size": 63488 00:27:40.845 }, 00:27:40.845 { 00:27:40.845 "name": "BaseBdev4", 00:27:40.845 "uuid": "a746649e-7aa9-492c-a5d5-8f57d78ece1e", 00:27:40.845 "is_configured": true, 00:27:40.845 "data_offset": 2048, 00:27:40.845 "data_size": 63488 00:27:40.845 } 00:27:40.845 ] 00:27:40.845 }' 00:27:40.845 13:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.845 13:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.412 [2024-10-28 13:38:55.333329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.412 [2024-10-28 13:38:55.410095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.412 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.412 [2024-10-28 13:38:55.486052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:27:41.412 [2024-10-28 13:38:55.486165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.413 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.672 BaseBdev2
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.672 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.672 [
00:27:41.672 {
00:27:41.672 "name": "BaseBdev2",
00:27:41.672 "aliases": [
00:27:41.672 "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc"
00:27:41.672 ],
00:27:41.672 "product_name": "Malloc disk",
00:27:41.672 "block_size": 512,
00:27:41.672 "num_blocks": 65536,
00:27:41.672 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc",
00:27:41.672 "assigned_rate_limits": {
00:27:41.672 "rw_ios_per_sec": 0,
00:27:41.672 "rw_mbytes_per_sec": 0,
00:27:41.672 "r_mbytes_per_sec": 0,
00:27:41.672 "w_mbytes_per_sec": 0
00:27:41.672 },
00:27:41.672 "claimed": false,
00:27:41.672 "zoned": false,
00:27:41.672 "supported_io_types": {
00:27:41.672 "read": true,
00:27:41.673 "write": true,
00:27:41.673 "unmap": true,
00:27:41.673 "flush": true,
00:27:41.673 "reset": true,
00:27:41.673 "nvme_admin": false,
00:27:41.673 "nvme_io": false,
00:27:41.673 "nvme_io_md": false,
00:27:41.673 "write_zeroes": true,
00:27:41.673 "zcopy": true,
00:27:41.673 "get_zone_info": false,
00:27:41.673 "zone_management": false,
00:27:41.673 "zone_append": false,
00:27:41.673 "compare": false,
00:27:41.673 "compare_and_write": false,
00:27:41.673 "abort": true,
00:27:41.673 "seek_hole": false,
00:27:41.673 "seek_data": false,
00:27:41.673 "copy": true,
00:27:41.673 "nvme_iov_md": false
00:27:41.673 },
00:27:41.673 "memory_domains": [
00:27:41.673 {
00:27:41.673 "dma_device_id": "system",
00:27:41.673 "dma_device_type": 1
00:27:41.673 },
00:27:41.673 {
00:27:41.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:41.673 "dma_device_type": 2
00:27:41.673 }
00:27:41.673 ],
00:27:41.673 "driver_specific": {}
00:27:41.673 }
00:27:41.673 ]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.673 BaseBdev3
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.673 [
00:27:41.673 {
00:27:41.673 "name": "BaseBdev3",
00:27:41.673 "aliases": [
00:27:41.673 "f6f5233e-6bd6-4a70-a584-911aaf3b66aa"
00:27:41.673 ],
00:27:41.673 "product_name": "Malloc disk",
00:27:41.673 "block_size": 512,
00:27:41.673 "num_blocks": 65536,
00:27:41.673 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa",
00:27:41.673 "assigned_rate_limits": {
00:27:41.673 "rw_ios_per_sec": 0,
00:27:41.673 "rw_mbytes_per_sec": 0,
00:27:41.673 "r_mbytes_per_sec": 0,
00:27:41.673 "w_mbytes_per_sec": 0
00:27:41.673 },
00:27:41.673 "claimed": false,
00:27:41.673 "zoned": false,
00:27:41.673 "supported_io_types": {
00:27:41.673 "read": true,
00:27:41.673 "write": true,
00:27:41.673 "unmap": true,
00:27:41.673 "flush": true,
00:27:41.673 "reset": true,
00:27:41.673 "nvme_admin": false,
00:27:41.673 "nvme_io": false,
00:27:41.673 "nvme_io_md": false,
00:27:41.673 "write_zeroes": true,
00:27:41.673 "zcopy": true,
00:27:41.673 "get_zone_info": false,
00:27:41.673 "zone_management": false,
00:27:41.673 "zone_append": false,
00:27:41.673 "compare": false,
00:27:41.673 "compare_and_write": false,
00:27:41.673 "abort": true,
00:27:41.673 "seek_hole": false,
00:27:41.673 "seek_data": false,
00:27:41.673 "copy": true,
00:27:41.673 "nvme_iov_md": false
00:27:41.673 },
00:27:41.673 "memory_domains": [
00:27:41.673 {
00:27:41.673 "dma_device_id": "system",
00:27:41.673 "dma_device_type": 1
00:27:41.673 },
00:27:41.673 {
00:27:41.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:41.673 "dma_device_type": 2
00:27:41.673 }
00:27:41.673 ],
00:27:41.673 "driver_specific": {}
00:27:41.673 }
00:27:41.673 ]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.673 BaseBdev4
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.673 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.674 [
00:27:41.674 {
00:27:41.674 "name": "BaseBdev4",
00:27:41.674 "aliases": [
00:27:41.674 "1b9cb504-5c96-4266-8a83-bafc58d2ca6e"
00:27:41.674 ],
00:27:41.674 "product_name": "Malloc disk",
00:27:41.674 "block_size": 512,
00:27:41.674 "num_blocks": 65536,
00:27:41.674 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e",
00:27:41.674 "assigned_rate_limits": {
00:27:41.674 "rw_ios_per_sec": 0,
00:27:41.674 "rw_mbytes_per_sec": 0,
00:27:41.674 "r_mbytes_per_sec": 0,
00:27:41.674 "w_mbytes_per_sec": 0
00:27:41.674 },
00:27:41.674 "claimed": false,
00:27:41.674 "zoned": false,
00:27:41.674 "supported_io_types": {
00:27:41.674 "read": true,
00:27:41.674 "write": true,
00:27:41.674 "unmap": true,
00:27:41.674 "flush": true,
00:27:41.674 "reset": true,
00:27:41.674 "nvme_admin": false,
00:27:41.674 "nvme_io": false,
00:27:41.674 "nvme_io_md": false,
00:27:41.674 "write_zeroes": true,
00:27:41.674 "zcopy": true,
00:27:41.674 "get_zone_info": false,
00:27:41.674 "zone_management": false,
00:27:41.674 "zone_append": false,
00:27:41.674 "compare": false,
00:27:41.674 "compare_and_write": false,
00:27:41.674 "abort": true,
00:27:41.674 "seek_hole": false,
00:27:41.674 "seek_data": false,
00:27:41.674 "copy": true,
00:27:41.674 "nvme_iov_md": false
00:27:41.674 },
00:27:41.674 "memory_domains": [
00:27:41.674 {
00:27:41.674 "dma_device_id": "system",
00:27:41.674 "dma_device_type": 1
00:27:41.674 },
00:27:41.674 {
00:27:41.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:41.674 "dma_device_type": 2
00:27:41.674 }
00:27:41.674 ],
00:27:41.674 "driver_specific": {}
00:27:41.674 }
00:27:41.674 ]
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.674 [2024-10-28 13:38:55.734923] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:27:41.674 [2024-10-28 13:38:55.735129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:27:41.674 [2024-10-28 13:38:55.735185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:27:41.674 [2024-10-28 13:38:55.738008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:27:41.674 [2024-10-28 13:38:55.738078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:41.674 "name": "Existed_Raid",
00:27:41.674 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189",
00:27:41.674 "strip_size_kb": 64,
00:27:41.674 "state": "configuring",
00:27:41.674 "raid_level": "concat",
00:27:41.674 "superblock": true,
00:27:41.674 "num_base_bdevs": 4,
00:27:41.674 "num_base_bdevs_discovered": 3,
00:27:41.674 "num_base_bdevs_operational": 4,
00:27:41.674 "base_bdevs_list": [
00:27:41.674 {
00:27:41.674 "name": "BaseBdev1",
00:27:41.674 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:41.674 "is_configured": false,
00:27:41.674 "data_offset": 0,
00:27:41.674 "data_size": 0
00:27:41.674 },
00:27:41.674 {
00:27:41.674 "name": "BaseBdev2",
00:27:41.674 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc",
00:27:41.674 "is_configured": true,
00:27:41.674 "data_offset": 2048,
00:27:41.674 "data_size": 63488
00:27:41.674 },
00:27:41.674 {
00:27:41.674 "name": "BaseBdev3",
00:27:41.674 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa",
00:27:41.674 "is_configured": true,
00:27:41.674 "data_offset": 2048,
00:27:41.674 "data_size": 63488
00:27:41.674 },
00:27:41.674 {
00:27:41.674 "name": "BaseBdev4",
00:27:41.674 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e",
00:27:41.674 "is_configured": true,
00:27:41.674 "data_offset": 2048,
00:27:41.674 "data_size": 63488
00:27:41.674 }
00:27:41.674 ]
00:27:41.674 }'
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:41.674 13:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.246 [2024-10-28 13:38:56.271337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:42.246 "name": "Existed_Raid",
00:27:42.246 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189",
00:27:42.246 "strip_size_kb": 64,
00:27:42.246 "state": "configuring",
00:27:42.246 "raid_level": "concat",
00:27:42.246 "superblock": true,
00:27:42.246 "num_base_bdevs": 4,
00:27:42.246 "num_base_bdevs_discovered": 2,
00:27:42.246 "num_base_bdevs_operational": 4,
00:27:42.246 "base_bdevs_list": [
00:27:42.246 {
00:27:42.246 "name": "BaseBdev1",
00:27:42.246 "uuid": "00000000-0000-0000-0000-000000000000",
00:27:42.246 "is_configured": false,
00:27:42.246 "data_offset": 0,
00:27:42.246 "data_size": 0
00:27:42.246 },
00:27:42.246 {
00:27:42.246 "name": null,
00:27:42.246 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc",
00:27:42.246 "is_configured": false,
00:27:42.246 "data_offset": 0,
00:27:42.246 "data_size": 63488
00:27:42.246 },
00:27:42.246 {
00:27:42.246 "name": "BaseBdev3",
00:27:42.246 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa",
00:27:42.246 "is_configured": true,
00:27:42.246 "data_offset": 2048,
00:27:42.246 "data_size": 63488
00:27:42.246 },
00:27:42.246 {
00:27:42.246 "name": "BaseBdev4",
00:27:42.246 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e",
00:27:42.246 "is_configured": true,
00:27:42.246 "data_offset": 2048,
00:27:42.246 "data_size": 63488
00:27:42.246 }
00:27:42.246 ]
00:27:42.246 }'
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:42.246 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.845 [2024-10-28 13:38:56.941377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:27:42.845 BaseBdev1
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.845 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:42.845 [
00:27:42.845 {
00:27:42.845 "name": "BaseBdev1",
00:27:42.845 "aliases": [
00:27:42.845 "1c9c5795-2257-4e38-88d7-34206a42adeb"
00:27:42.845 ],
00:27:42.845 "product_name": "Malloc disk",
00:27:42.845 "block_size": 512,
00:27:42.845 "num_blocks": 65536,
00:27:42.845 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb",
00:27:42.845 "assigned_rate_limits": {
00:27:42.845 "rw_ios_per_sec": 0,
00:27:42.845 "rw_mbytes_per_sec": 0,
00:27:42.845 "r_mbytes_per_sec": 0,
00:27:42.845 "w_mbytes_per_sec": 0
00:27:42.845 },
00:27:42.845 "claimed": true,
00:27:42.845 "claim_type": "exclusive_write",
00:27:42.845 "zoned": false,
00:27:42.845 "supported_io_types": {
00:27:42.845 "read": true,
00:27:42.845 "write": true,
00:27:42.845 "unmap": true,
00:27:42.845 "flush": true,
00:27:42.845 "reset": true,
00:27:42.845 "nvme_admin": false,
00:27:42.845 "nvme_io": false,
00:27:42.845 "nvme_io_md": false,
00:27:42.845 "write_zeroes": true,
00:27:42.845 "zcopy": true,
00:27:42.845 "get_zone_info": false,
00:27:42.845 "zone_management": false,
00:27:42.845 "zone_append": false,
00:27:42.845 "compare": false,
00:27:42.845 "compare_and_write": false,
00:27:42.845 "abort": true,
00:27:42.845 "seek_hole": false,
00:27:42.845 "seek_data": false,
00:27:42.845 "copy": true,
00:27:42.845 "nvme_iov_md": false
00:27:42.845 },
00:27:42.845 "memory_domains": [
00:27:42.845 {
00:27:42.845 "dma_device_id": "system",
00:27:42.845 "dma_device_type": 1
00:27:42.845 },
00:27:42.845 {
00:27:42.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:27:42.845 "dma_device_type": 2
00:27:42.845 }
00:27:42.845 ],
00:27:42.845 "driver_specific": {}
00:27:42.845 }
00:27:42.845 ]
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:42.846 13:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:43.105 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:43.105 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:43.105 "name": "Existed_Raid",
00:27:43.105 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189",
00:27:43.105 "strip_size_kb": 64,
00:27:43.105 "state": "configuring",
00:27:43.105 "raid_level": "concat",
00:27:43.105 "superblock": true,
00:27:43.105 "num_base_bdevs": 4,
00:27:43.105 "num_base_bdevs_discovered": 3,
00:27:43.105 "num_base_bdevs_operational": 4,
00:27:43.105 "base_bdevs_list": [
00:27:43.105 {
00:27:43.105 "name": "BaseBdev1",
00:27:43.105 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb",
00:27:43.105 "is_configured": true,
00:27:43.105 "data_offset": 2048,
00:27:43.105 "data_size": 63488
00:27:43.105 },
00:27:43.105 {
00:27:43.105 "name": null,
00:27:43.105 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc",
00:27:43.105 "is_configured": false,
00:27:43.105 "data_offset": 0,
00:27:43.105 "data_size": 63488
00:27:43.105 },
00:27:43.105 {
00:27:43.105 "name": "BaseBdev3",
00:27:43.105 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa",
00:27:43.105 "is_configured": true,
00:27:43.105 "data_offset": 2048,
00:27:43.105 "data_size": 63488
00:27:43.105 },
00:27:43.105 {
00:27:43.105 "name": "BaseBdev4",
00:27:43.105 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e",
00:27:43.105 "is_configured": true,
00:27:43.105 "data_offset": 2048,
00:27:43.105 "data_size": 63488
00:27:43.105 }
00:27:43.105 ]
00:27:43.105 }'
00:27:43.105 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:43.105 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:43.364 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:43.365 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:27:43.365 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:43.365 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:43.365 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:43.624 [2024-10-28 13:38:57.553700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:27:43.624 "name": "Existed_Raid",
00:27:43.624 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189",
00:27:43.624 "strip_size_kb": 64,
00:27:43.624 "state": "configuring",
00:27:43.624 "raid_level": "concat",
00:27:43.624 "superblock": true,
00:27:43.624 "num_base_bdevs": 4,
00:27:43.624 "num_base_bdevs_discovered": 2,
00:27:43.624 "num_base_bdevs_operational": 4,
00:27:43.624 "base_bdevs_list": [
00:27:43.624 {
00:27:43.624 "name": "BaseBdev1",
00:27:43.624 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb",
00:27:43.624 "is_configured": true,
00:27:43.624 "data_offset": 2048,
00:27:43.624 "data_size": 63488
00:27:43.624 },
00:27:43.624 {
00:27:43.624 "name": null,
00:27:43.624 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc",
00:27:43.624 "is_configured": false,
00:27:43.624 "data_offset": 0,
00:27:43.624 "data_size": 63488
00:27:43.624 },
00:27:43.624 {
00:27:43.624 "name": null,
00:27:43.624 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa",
00:27:43.624 "is_configured": false,
00:27:43.624 "data_offset": 0,
00:27:43.624 "data_size": 63488
00:27:43.624 },
00:27:43.624 {
00:27:43.624 "name": "BaseBdev4",
00:27:43.624 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e",
00:27:43.624 "is_configured": true,
00:27:43.624 "data_offset": 2048,
00:27:43.624 "data_size": 63488
00:27:43.624 }
00:27:43.624 ]
00:27:43.624 }'
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:27:43.624 13:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:27:44.192 [2024-10-28 13:38:58.149959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd
bdev_raid_get_bdevs all 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.192 "name": "Existed_Raid", 00:27:44.192 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:44.192 "strip_size_kb": 64, 00:27:44.192 "state": "configuring", 00:27:44.192 "raid_level": "concat", 00:27:44.192 "superblock": true, 00:27:44.192 "num_base_bdevs": 4, 00:27:44.192 "num_base_bdevs_discovered": 3, 00:27:44.192 "num_base_bdevs_operational": 4, 00:27:44.192 "base_bdevs_list": [ 00:27:44.192 { 00:27:44.192 "name": "BaseBdev1", 00:27:44.192 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:44.192 "is_configured": true, 00:27:44.192 "data_offset": 2048, 00:27:44.192 "data_size": 63488 00:27:44.192 }, 00:27:44.192 { 00:27:44.192 "name": null, 00:27:44.192 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc", 00:27:44.192 "is_configured": false, 00:27:44.192 "data_offset": 0, 00:27:44.192 "data_size": 63488 00:27:44.192 }, 00:27:44.192 { 00:27:44.192 "name": "BaseBdev3", 00:27:44.192 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa", 00:27:44.192 "is_configured": true, 00:27:44.192 "data_offset": 2048, 00:27:44.192 "data_size": 63488 00:27:44.192 }, 00:27:44.192 { 00:27:44.192 "name": "BaseBdev4", 00:27:44.192 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e", 00:27:44.192 "is_configured": true, 00:27:44.192 "data_offset": 2048, 00:27:44.192 "data_size": 63488 00:27:44.192 } 00:27:44.192 ] 00:27:44.192 }' 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.192 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.759 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.759 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.759 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:44.759 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 [2024-10-28 13:38:58.734087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:44.760 "name": "Existed_Raid", 00:27:44.760 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:44.760 "strip_size_kb": 64, 00:27:44.760 "state": "configuring", 00:27:44.760 "raid_level": "concat", 00:27:44.760 "superblock": true, 00:27:44.760 "num_base_bdevs": 4, 00:27:44.760 "num_base_bdevs_discovered": 2, 00:27:44.760 "num_base_bdevs_operational": 4, 00:27:44.760 "base_bdevs_list": [ 00:27:44.760 { 00:27:44.760 "name": null, 00:27:44.760 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:44.760 "is_configured": false, 00:27:44.760 "data_offset": 0, 00:27:44.760 "data_size": 63488 00:27:44.760 }, 00:27:44.760 { 00:27:44.760 "name": null, 00:27:44.760 "uuid": 
"8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc", 00:27:44.760 "is_configured": false, 00:27:44.760 "data_offset": 0, 00:27:44.760 "data_size": 63488 00:27:44.760 }, 00:27:44.760 { 00:27:44.760 "name": "BaseBdev3", 00:27:44.760 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa", 00:27:44.760 "is_configured": true, 00:27:44.760 "data_offset": 2048, 00:27:44.760 "data_size": 63488 00:27:44.760 }, 00:27:44.760 { 00:27:44.760 "name": "BaseBdev4", 00:27:44.760 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e", 00:27:44.760 "is_configured": true, 00:27:44.760 "data_offset": 2048, 00:27:44.760 "data_size": 63488 00:27:44.760 } 00:27:44.760 ] 00:27:44.760 }' 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:44.760 13:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.328 [2024-10-28 13:38:59.374059] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:45.328 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.329 13:38:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:45.329 "name": "Existed_Raid", 00:27:45.329 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:45.329 "strip_size_kb": 64, 00:27:45.329 "state": "configuring", 00:27:45.329 "raid_level": "concat", 00:27:45.329 "superblock": true, 00:27:45.329 "num_base_bdevs": 4, 00:27:45.329 "num_base_bdevs_discovered": 3, 00:27:45.329 "num_base_bdevs_operational": 4, 00:27:45.329 "base_bdevs_list": [ 00:27:45.329 { 00:27:45.329 "name": null, 00:27:45.329 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:45.329 "is_configured": false, 00:27:45.329 "data_offset": 0, 00:27:45.329 "data_size": 63488 00:27:45.329 }, 00:27:45.329 { 00:27:45.329 "name": "BaseBdev2", 00:27:45.329 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc", 00:27:45.329 "is_configured": true, 00:27:45.329 "data_offset": 2048, 00:27:45.329 "data_size": 63488 00:27:45.329 }, 00:27:45.329 { 00:27:45.329 "name": "BaseBdev3", 00:27:45.329 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa", 00:27:45.329 "is_configured": true, 00:27:45.329 "data_offset": 2048, 00:27:45.329 "data_size": 63488 00:27:45.329 }, 00:27:45.329 { 00:27:45.329 "name": "BaseBdev4", 00:27:45.329 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e", 00:27:45.329 "is_configured": true, 00:27:45.329 "data_offset": 2048, 00:27:45.329 "data_size": 63488 00:27:45.329 } 00:27:45.329 ] 00:27:45.329 }' 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:45.329 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.898 13:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:45.898 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.898 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1c9c5795-2257-4e38-88d7-34206a42adeb 00:27:45.898 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.899 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:45.899 [2024-10-28 13:39:00.053876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:45.899 [2024-10-28 13:39:00.054307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:45.899 [2024-10-28 13:39:00.054346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:46.157 NewBaseBdev 00:27:46.157 [2024-10-28 13:39:00.054761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:27:46.157 [2024-10-28 13:39:00.054985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:46.157 [2024-10-28 13:39:00.055009] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:46.157 [2024-10-28 13:39:00.055245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.157 [ 00:27:46.157 { 00:27:46.157 "name": "NewBaseBdev", 00:27:46.157 "aliases": [ 00:27:46.157 "1c9c5795-2257-4e38-88d7-34206a42adeb" 
00:27:46.157 ], 00:27:46.157 "product_name": "Malloc disk", 00:27:46.157 "block_size": 512, 00:27:46.157 "num_blocks": 65536, 00:27:46.157 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:46.157 "assigned_rate_limits": { 00:27:46.157 "rw_ios_per_sec": 0, 00:27:46.157 "rw_mbytes_per_sec": 0, 00:27:46.157 "r_mbytes_per_sec": 0, 00:27:46.157 "w_mbytes_per_sec": 0 00:27:46.157 }, 00:27:46.157 "claimed": true, 00:27:46.157 "claim_type": "exclusive_write", 00:27:46.157 "zoned": false, 00:27:46.157 "supported_io_types": { 00:27:46.157 "read": true, 00:27:46.157 "write": true, 00:27:46.157 "unmap": true, 00:27:46.157 "flush": true, 00:27:46.157 "reset": true, 00:27:46.157 "nvme_admin": false, 00:27:46.157 "nvme_io": false, 00:27:46.157 "nvme_io_md": false, 00:27:46.157 "write_zeroes": true, 00:27:46.157 "zcopy": true, 00:27:46.157 "get_zone_info": false, 00:27:46.157 "zone_management": false, 00:27:46.157 "zone_append": false, 00:27:46.157 "compare": false, 00:27:46.157 "compare_and_write": false, 00:27:46.157 "abort": true, 00:27:46.157 "seek_hole": false, 00:27:46.157 "seek_data": false, 00:27:46.157 "copy": true, 00:27:46.157 "nvme_iov_md": false 00:27:46.157 }, 00:27:46.157 "memory_domains": [ 00:27:46.157 { 00:27:46.157 "dma_device_id": "system", 00:27:46.157 "dma_device_type": 1 00:27:46.157 }, 00:27:46.157 { 00:27:46.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.157 "dma_device_type": 2 00:27:46.157 } 00:27:46.157 ], 00:27:46.157 "driver_specific": {} 00:27:46.157 } 00:27:46.157 ] 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:46.157 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.158 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.158 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.158 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:46.158 "name": "Existed_Raid", 00:27:46.158 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:46.158 "strip_size_kb": 64, 00:27:46.158 "state": "online", 00:27:46.158 "raid_level": "concat", 00:27:46.158 "superblock": true, 00:27:46.158 "num_base_bdevs": 4, 00:27:46.158 "num_base_bdevs_discovered": 4, 00:27:46.158 "num_base_bdevs_operational": 4, 
00:27:46.158 "base_bdevs_list": [ 00:27:46.158 { 00:27:46.158 "name": "NewBaseBdev", 00:27:46.158 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:46.158 "is_configured": true, 00:27:46.158 "data_offset": 2048, 00:27:46.158 "data_size": 63488 00:27:46.158 }, 00:27:46.158 { 00:27:46.158 "name": "BaseBdev2", 00:27:46.158 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc", 00:27:46.158 "is_configured": true, 00:27:46.158 "data_offset": 2048, 00:27:46.158 "data_size": 63488 00:27:46.158 }, 00:27:46.158 { 00:27:46.158 "name": "BaseBdev3", 00:27:46.158 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa", 00:27:46.158 "is_configured": true, 00:27:46.158 "data_offset": 2048, 00:27:46.158 "data_size": 63488 00:27:46.158 }, 00:27:46.158 { 00:27:46.158 "name": "BaseBdev4", 00:27:46.158 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e", 00:27:46.158 "is_configured": true, 00:27:46.158 "data_offset": 2048, 00:27:46.158 "data_size": 63488 00:27:46.158 } 00:27:46.158 ] 00:27:46.158 }' 00:27:46.158 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:46.158 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:46.747 [2024-10-28 13:39:00.638612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.747 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.747 "name": "Existed_Raid", 00:27:46.747 "aliases": [ 00:27:46.747 "d170c1dc-02d6-412e-8ae1-4537f3a34189" 00:27:46.747 ], 00:27:46.747 "product_name": "Raid Volume", 00:27:46.747 "block_size": 512, 00:27:46.747 "num_blocks": 253952, 00:27:46.747 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:46.747 "assigned_rate_limits": { 00:27:46.747 "rw_ios_per_sec": 0, 00:27:46.747 "rw_mbytes_per_sec": 0, 00:27:46.747 "r_mbytes_per_sec": 0, 00:27:46.747 "w_mbytes_per_sec": 0 00:27:46.747 }, 00:27:46.747 "claimed": false, 00:27:46.747 "zoned": false, 00:27:46.747 "supported_io_types": { 00:27:46.747 "read": true, 00:27:46.747 "write": true, 00:27:46.747 "unmap": true, 00:27:46.747 "flush": true, 00:27:46.747 "reset": true, 00:27:46.747 "nvme_admin": false, 00:27:46.747 "nvme_io": false, 00:27:46.747 "nvme_io_md": false, 00:27:46.747 "write_zeroes": true, 00:27:46.747 "zcopy": false, 00:27:46.747 "get_zone_info": false, 00:27:46.747 "zone_management": false, 00:27:46.747 "zone_append": false, 00:27:46.747 "compare": false, 00:27:46.747 "compare_and_write": false, 00:27:46.747 "abort": false, 00:27:46.747 "seek_hole": false, 00:27:46.747 "seek_data": false, 00:27:46.747 "copy": false, 00:27:46.747 "nvme_iov_md": false 00:27:46.747 }, 00:27:46.747 "memory_domains": [ 00:27:46.747 { 
00:27:46.747 "dma_device_id": "system", 00:27:46.747 "dma_device_type": 1 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.747 "dma_device_type": 2 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "system", 00:27:46.747 "dma_device_type": 1 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.747 "dma_device_type": 2 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "system", 00:27:46.747 "dma_device_type": 1 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.747 "dma_device_type": 2 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "system", 00:27:46.747 "dma_device_type": 1 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.747 "dma_device_type": 2 00:27:46.747 } 00:27:46.747 ], 00:27:46.747 "driver_specific": { 00:27:46.747 "raid": { 00:27:46.747 "uuid": "d170c1dc-02d6-412e-8ae1-4537f3a34189", 00:27:46.747 "strip_size_kb": 64, 00:27:46.747 "state": "online", 00:27:46.747 "raid_level": "concat", 00:27:46.747 "superblock": true, 00:27:46.747 "num_base_bdevs": 4, 00:27:46.747 "num_base_bdevs_discovered": 4, 00:27:46.747 "num_base_bdevs_operational": 4, 00:27:46.747 "base_bdevs_list": [ 00:27:46.747 { 00:27:46.747 "name": "NewBaseBdev", 00:27:46.747 "uuid": "1c9c5795-2257-4e38-88d7-34206a42adeb", 00:27:46.747 "is_configured": true, 00:27:46.747 "data_offset": 2048, 00:27:46.747 "data_size": 63488 00:27:46.747 }, 00:27:46.747 { 00:27:46.747 "name": "BaseBdev2", 00:27:46.747 "uuid": "8ac6e6b5-8beb-4acc-ae56-2dcfc5ce53fc", 00:27:46.748 "is_configured": true, 00:27:46.748 "data_offset": 2048, 00:27:46.748 "data_size": 63488 00:27:46.748 }, 00:27:46.748 { 00:27:46.748 "name": "BaseBdev3", 00:27:46.748 "uuid": "f6f5233e-6bd6-4a70-a584-911aaf3b66aa", 00:27:46.748 "is_configured": true, 00:27:46.748 "data_offset": 2048, 00:27:46.748 "data_size": 63488 00:27:46.748 }, 
00:27:46.748 { 00:27:46.748 "name": "BaseBdev4", 00:27:46.748 "uuid": "1b9cb504-5c96-4266-8a83-bafc58d2ca6e", 00:27:46.748 "is_configured": true, 00:27:46.748 "data_offset": 2048, 00:27:46.748 "data_size": 63488 00:27:46.748 } 00:27:46.748 ] 00:27:46.748 } 00:27:46.748 } 00:27:46.748 }' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:46.748 BaseBdev2 00:27:46.748 BaseBdev3 00:27:46.748 BaseBdev4' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.748 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.007 13:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:47.007 13:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.007 13:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.007 13:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.007 13:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:47.007 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.007 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:47.007 [2024-10-28 13:39:01.054186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:47.007 [2024-10-28 13:39:01.054235] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:47.008 [2024-10-28 13:39:01.054392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.008 [2024-10-28 13:39:01.054517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.008 [2024-10-28 13:39:01.054578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.008 13:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84697 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84697 ']' 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84697 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84697 00:27:47.008 killing process with pid 84697 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84697' 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84697 00:27:47.008 [2024-10-28 13:39:01.095714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:47.008 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84697 00:27:47.008 [2024-10-28 13:39:01.152002] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:47.576 13:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:47.576 00:27:47.576 real 0m11.785s 00:27:47.576 user 0m20.604s 00:27:47.576 sys 0m1.895s 00:27:47.576 13:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.576 ************************************ 00:27:47.576 END TEST raid_state_function_test_sb 00:27:47.576 ************************************ 00:27:47.576 13:39:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:47.576 13:39:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:27:47.576 13:39:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:47.576 13:39:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.576 13:39:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:47.576 ************************************ 00:27:47.576 START TEST raid_superblock_test 00:27:47.576 ************************************ 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 
00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85376 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85376 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:47.576 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85376 ']' 00:27:47.577 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.577 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:47.577 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.577 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:47.577 13:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.577 [2024-10-28 13:39:01.659774] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:27:47.577 [2024-10-28 13:39:01.660017] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85376 ] 00:27:47.836 [2024-10-28 13:39:01.822565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:47.836 [2024-10-28 13:39:01.855730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.836 [2024-10-28 13:39:01.927465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.094 [2024-10-28 13:39:02.021844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.094 [2024-10-28 13:39:02.021917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.663 malloc1 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.663 [2024-10-28 13:39:02.675326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:48.663 [2024-10-28 13:39:02.675414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.663 [2024-10-28 13:39:02.675453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:48.663 [2024-10-28 13:39:02.675471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.663 [2024-10-28 13:39:02.678869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.663 [2024-10-28 13:39:02.678928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:48.663 pt1 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.663 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 malloc2 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 [2024-10-28 13:39:02.708885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:48.664 [2024-10-28 13:39:02.708964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.664 [2024-10-28 13:39:02.708992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:48.664 [2024-10-28 13:39:02.709007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.664 [2024-10-28 13:39:02.712108] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.664 [2024-10-28 13:39:02.712175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:48.664 pt2 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 malloc3 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 [2024-10-28 13:39:02.740786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:48.664 [2024-10-28 13:39:02.740922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.664 [2024-10-28 13:39:02.740970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:48.664 [2024-10-28 13:39:02.740999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.664 [2024-10-28 13:39:02.744513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.664 [2024-10-28 13:39:02.744562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:48.664 pt3 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 malloc4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 [2024-10-28 13:39:02.787158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:48.664 [2024-10-28 13:39:02.787229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.664 [2024-10-28 13:39:02.787266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:48.664 [2024-10-28 13:39:02.787281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.664 [2024-10-28 13:39:02.790348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.664 [2024-10-28 13:39:02.790395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:48.664 pt4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.664 [2024-10-28 13:39:02.795392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:48.664 [2024-10-28 13:39:02.798861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:48.664 [2024-10-28 13:39:02.799028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:48.664 [2024-10-28 13:39:02.799199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:48.664 [2024-10-28 13:39:02.799580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:27:48.664 [2024-10-28 13:39:02.799632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:48.664 [2024-10-28 13:39:02.800042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:48.664 [2024-10-28 13:39:02.800306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:27:48.664 [2024-10-28 13:39:02.800341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:27:48.664 [2024-10-28 13:39:02.800503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.664 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.923 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.923 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.923 "name": "raid_bdev1", 00:27:48.923 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:48.923 "strip_size_kb": 64, 00:27:48.923 "state": "online", 00:27:48.923 "raid_level": "concat", 00:27:48.923 "superblock": true, 00:27:48.923 "num_base_bdevs": 4, 00:27:48.923 "num_base_bdevs_discovered": 4, 00:27:48.923 "num_base_bdevs_operational": 4, 00:27:48.923 "base_bdevs_list": [ 00:27:48.923 { 00:27:48.923 "name": "pt1", 00:27:48.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:48.923 "is_configured": true, 00:27:48.924 "data_offset": 2048, 00:27:48.924 "data_size": 63488 00:27:48.924 }, 00:27:48.924 { 00:27:48.924 "name": "pt2", 00:27:48.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.924 "is_configured": true, 00:27:48.924 "data_offset": 2048, 00:27:48.924 
"data_size": 63488 00:27:48.924 }, 00:27:48.924 { 00:27:48.924 "name": "pt3", 00:27:48.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.924 "is_configured": true, 00:27:48.924 "data_offset": 2048, 00:27:48.924 "data_size": 63488 00:27:48.924 }, 00:27:48.924 { 00:27:48.924 "name": "pt4", 00:27:48.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:48.924 "is_configured": true, 00:27:48.924 "data_offset": 2048, 00:27:48.924 "data_size": 63488 00:27:48.924 } 00:27:48.924 ] 00:27:48.924 }' 00:27:48.924 13:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.924 13:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:49.492 [2024-10-28 13:39:03.356217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.492 "name": "raid_bdev1", 00:27:49.492 "aliases": [ 00:27:49.492 "0cecb23c-4153-4fe6-853e-8514b5363ca9" 00:27:49.492 ], 00:27:49.492 "product_name": "Raid Volume", 00:27:49.492 "block_size": 512, 00:27:49.492 "num_blocks": 253952, 00:27:49.492 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:49.492 "assigned_rate_limits": { 00:27:49.492 "rw_ios_per_sec": 0, 00:27:49.492 "rw_mbytes_per_sec": 0, 00:27:49.492 "r_mbytes_per_sec": 0, 00:27:49.492 "w_mbytes_per_sec": 0 00:27:49.492 }, 00:27:49.492 "claimed": false, 00:27:49.492 "zoned": false, 00:27:49.492 "supported_io_types": { 00:27:49.492 "read": true, 00:27:49.492 "write": true, 00:27:49.492 "unmap": true, 00:27:49.492 "flush": true, 00:27:49.492 "reset": true, 00:27:49.492 "nvme_admin": false, 00:27:49.492 "nvme_io": false, 00:27:49.492 "nvme_io_md": false, 00:27:49.492 "write_zeroes": true, 00:27:49.492 "zcopy": false, 00:27:49.492 "get_zone_info": false, 00:27:49.492 "zone_management": false, 00:27:49.492 "zone_append": false, 00:27:49.492 "compare": false, 00:27:49.492 "compare_and_write": false, 00:27:49.492 "abort": false, 00:27:49.492 "seek_hole": false, 00:27:49.492 "seek_data": false, 00:27:49.492 "copy": false, 00:27:49.492 "nvme_iov_md": false 00:27:49.492 }, 00:27:49.492 "memory_domains": [ 00:27:49.492 { 00:27:49.492 "dma_device_id": "system", 00:27:49.492 "dma_device_type": 1 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.492 "dma_device_type": 2 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "system", 00:27:49.492 "dma_device_type": 1 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.492 "dma_device_type": 2 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "system", 00:27:49.492 "dma_device_type": 1 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:49.492 "dma_device_type": 2 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "system", 00:27:49.492 "dma_device_type": 1 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.492 "dma_device_type": 2 00:27:49.492 } 00:27:49.492 ], 00:27:49.492 "driver_specific": { 00:27:49.492 "raid": { 00:27:49.492 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:49.492 "strip_size_kb": 64, 00:27:49.492 "state": "online", 00:27:49.492 "raid_level": "concat", 00:27:49.492 "superblock": true, 00:27:49.492 "num_base_bdevs": 4, 00:27:49.492 "num_base_bdevs_discovered": 4, 00:27:49.492 "num_base_bdevs_operational": 4, 00:27:49.492 "base_bdevs_list": [ 00:27:49.492 { 00:27:49.492 "name": "pt1", 00:27:49.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.492 "is_configured": true, 00:27:49.492 "data_offset": 2048, 00:27:49.492 "data_size": 63488 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "name": "pt2", 00:27:49.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.492 "is_configured": true, 00:27:49.492 "data_offset": 2048, 00:27:49.492 "data_size": 63488 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "name": "pt3", 00:27:49.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:49.492 "is_configured": true, 00:27:49.492 "data_offset": 2048, 00:27:49.492 "data_size": 63488 00:27:49.492 }, 00:27:49.492 { 00:27:49.492 "name": "pt4", 00:27:49.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:49.492 "is_configured": true, 00:27:49.492 "data_offset": 2048, 00:27:49.492 "data_size": 63488 00:27:49.492 } 00:27:49.492 ] 00:27:49.492 } 00:27:49.492 } 00:27:49.492 }' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:49.492 pt2 00:27:49.492 pt3 00:27:49.492 
pt4' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.492 13:39:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.492 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 [2024-10-28 13:39:03.736203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0cecb23c-4153-4fe6-853e-8514b5363ca9 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0cecb23c-4153-4fe6-853e-8514b5363ca9 ']' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 [2024-10-28 13:39:03.787764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:49.751 [2024-10-28 13:39:03.787837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:49.751 [2024-10-28 13:39:03.787978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.751 [2024-10-28 13:39:03.788096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.751 [2024-10-28 13:39:03.788122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.751 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.752 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.011 [2024-10-28 13:39:03.951913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:50.011 [2024-10-28 13:39:03.955098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:50.011 [2024-10-28 13:39:03.955269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:50.011 [2024-10-28 13:39:03.955354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:27:50.011 [2024-10-28 13:39:03.955472] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:50.011 [2024-10-28 13:39:03.955587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:50.011 [2024-10-28 13:39:03.955625] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:50.011 [2024-10-28 13:39:03.955657] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:27:50.011 [2024-10-28 
13:39:03.955681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:50.011 [2024-10-28 13:39:03.955698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:27:50.011 request: 00:27:50.011 { 00:27:50.011 "name": "raid_bdev1", 00:27:50.011 "raid_level": "concat", 00:27:50.011 "base_bdevs": [ 00:27:50.011 "malloc1", 00:27:50.011 "malloc2", 00:27:50.011 "malloc3", 00:27:50.011 "malloc4" 00:27:50.011 ], 00:27:50.011 "strip_size_kb": 64, 00:27:50.011 "superblock": false, 00:27:50.011 "method": "bdev_raid_create", 00:27:50.011 "req_id": 1 00:27:50.011 } 00:27:50.011 Got JSON-RPC error response 00:27:50.011 response: 00:27:50.011 { 00:27:50.011 "code": -17, 00:27:50.011 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:50.011 } 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.011 13:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:50.012 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.012 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.012 13:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:50.012 13:39:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.012 [2024-10-28 13:39:04.020033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:50.012 [2024-10-28 13:39:04.020140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.012 [2024-10-28 13:39:04.020198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:50.012 [2024-10-28 13:39:04.020218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.012 [2024-10-28 13:39:04.023581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.012 [2024-10-28 13:39:04.023634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:50.012 [2024-10-28 13:39:04.023729] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:50.012 [2024-10-28 13:39:04.023816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:50.012 pt1 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.012 "name": "raid_bdev1", 00:27:50.012 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:50.012 "strip_size_kb": 64, 00:27:50.012 "state": "configuring", 00:27:50.012 "raid_level": "concat", 00:27:50.012 "superblock": true, 00:27:50.012 "num_base_bdevs": 4, 00:27:50.012 "num_base_bdevs_discovered": 1, 00:27:50.012 "num_base_bdevs_operational": 4, 00:27:50.012 "base_bdevs_list": [ 00:27:50.012 { 00:27:50.012 "name": "pt1", 00:27:50.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:50.012 "is_configured": true, 00:27:50.012 "data_offset": 2048, 00:27:50.012 "data_size": 63488 00:27:50.012 }, 00:27:50.012 { 00:27:50.012 "name": null, 00:27:50.012 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:50.012 "is_configured": false, 00:27:50.012 "data_offset": 2048, 00:27:50.012 "data_size": 63488 00:27:50.012 }, 00:27:50.012 { 00:27:50.012 "name": null, 00:27:50.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:50.012 "is_configured": false, 00:27:50.012 "data_offset": 2048, 00:27:50.012 "data_size": 63488 00:27:50.012 }, 00:27:50.012 { 00:27:50.012 "name": null, 00:27:50.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:50.012 "is_configured": false, 00:27:50.012 "data_offset": 2048, 00:27:50.012 "data_size": 63488 00:27:50.012 } 00:27:50.012 ] 00:27:50.012 }' 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.012 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.579 [2024-10-28 13:39:04.568257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:50.579 [2024-10-28 13:39:04.568353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.579 [2024-10-28 13:39:04.568387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:50.579 [2024-10-28 13:39:04.568407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.579 [2024-10-28 13:39:04.569054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.579 [2024-10-28 13:39:04.569098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:27:50.579 [2024-10-28 13:39:04.569270] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:50.579 [2024-10-28 13:39:04.569312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:50.579 pt2 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.579 [2024-10-28 13:39:04.576120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.579 "name": "raid_bdev1", 00:27:50.579 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:50.579 "strip_size_kb": 64, 00:27:50.579 "state": "configuring", 00:27:50.579 "raid_level": "concat", 00:27:50.579 "superblock": true, 00:27:50.579 "num_base_bdevs": 4, 00:27:50.579 "num_base_bdevs_discovered": 1, 00:27:50.579 "num_base_bdevs_operational": 4, 00:27:50.579 "base_bdevs_list": [ 00:27:50.579 { 00:27:50.579 "name": "pt1", 00:27:50.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:50.579 "is_configured": true, 00:27:50.579 "data_offset": 2048, 00:27:50.579 "data_size": 63488 00:27:50.579 }, 00:27:50.579 { 00:27:50.579 "name": null, 00:27:50.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:50.579 "is_configured": false, 00:27:50.579 "data_offset": 0, 00:27:50.579 "data_size": 63488 00:27:50.579 }, 00:27:50.579 { 00:27:50.579 "name": null, 00:27:50.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:50.579 "is_configured": false, 00:27:50.579 "data_offset": 2048, 00:27:50.579 "data_size": 63488 00:27:50.579 }, 00:27:50.579 { 00:27:50.579 "name": null, 00:27:50.579 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:50.579 "is_configured": false, 00:27:50.579 "data_offset": 2048, 00:27:50.579 "data_size": 63488 00:27:50.579 } 00:27:50.579 ] 00:27:50.579 }' 
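The `verify_raid_bdev_state` helper above pulls the raid bdev's JSON with `rpc_cmd bdev_raid_get_bdevs all` and filters it through `jq -r '.[] | select(.name == "raid_bdev1")'`, then checks fields such as `state` and the configured count of `base_bdevs_list`. As a minimal sketch of that same check in Python — the sample document below is hypothetical, trimmed to the field names actually printed in the log above — the bookkeeping looks like:

```python
import json

# Trimmed sample modeled on the raid_bdev_info JSON printed above.
# Field names come from the log; the values are a hypothetical snapshot
# of the "configuring" state right after pt1 was claimed.
raid_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false}
  ]
}
""")

# Equivalent of the script's num_base_bdevs_discovered bookkeeping:
# count base bdevs whose "is_configured" flag is set.
discovered = sum(1 for b in raid_info["base_bdevs_list"] if b["is_configured"])
assert discovered == 1
assert raid_info["state"] == "configuring"
```

The shell script performs the same comparison with `jq` selectors and `[[ … == … ]]` tests; only once all four passthru bdevs (pt1 through pt4) are re-registered does `num_base_bdevs_discovered` reach 4 and the state move from "configuring" to "online".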
00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.579 13:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.147 [2024-10-28 13:39:05.116405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:51.147 [2024-10-28 13:39:05.116498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.147 [2024-10-28 13:39:05.116550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:51.147 [2024-10-28 13:39:05.116581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.147 [2024-10-28 13:39:05.117185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.147 [2024-10-28 13:39:05.117266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:51.147 [2024-10-28 13:39:05.117381] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:51.147 [2024-10-28 13:39:05.117417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:51.147 pt2 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:51.147 13:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.147 [2024-10-28 13:39:05.128364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:51.147 [2024-10-28 13:39:05.128425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.147 [2024-10-28 13:39:05.128455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:51.147 [2024-10-28 13:39:05.128468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.147 [2024-10-28 13:39:05.128935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.147 [2024-10-28 13:39:05.128972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:51.147 [2024-10-28 13:39:05.129045] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:51.147 [2024-10-28 13:39:05.129070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:51.147 pt3 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.147 [2024-10-28 13:39:05.136357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:51.147 [2024-10-28 13:39:05.136419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:51.147 [2024-10-28 13:39:05.136450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:51.147 [2024-10-28 13:39:05.136464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:51.147 [2024-10-28 13:39:05.136928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:51.147 [2024-10-28 13:39:05.136963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:51.147 [2024-10-28 13:39:05.137036] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:51.147 [2024-10-28 13:39:05.137061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:51.147 [2024-10-28 13:39:05.137275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:27:51.147 [2024-10-28 13:39:05.137312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:51.147 [2024-10-28 13:39:05.137688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:51.147 [2024-10-28 13:39:05.137844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:51.147 [2024-10-28 13:39:05.137870] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:51.147 [2024-10-28 13:39:05.137984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.147 pt4 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.147 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:51.147 "name": 
"raid_bdev1", 00:27:51.148 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:51.148 "strip_size_kb": 64, 00:27:51.148 "state": "online", 00:27:51.148 "raid_level": "concat", 00:27:51.148 "superblock": true, 00:27:51.148 "num_base_bdevs": 4, 00:27:51.148 "num_base_bdevs_discovered": 4, 00:27:51.148 "num_base_bdevs_operational": 4, 00:27:51.148 "base_bdevs_list": [ 00:27:51.148 { 00:27:51.148 "name": "pt1", 00:27:51.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:51.148 "is_configured": true, 00:27:51.148 "data_offset": 2048, 00:27:51.148 "data_size": 63488 00:27:51.148 }, 00:27:51.148 { 00:27:51.148 "name": "pt2", 00:27:51.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:51.148 "is_configured": true, 00:27:51.148 "data_offset": 2048, 00:27:51.148 "data_size": 63488 00:27:51.148 }, 00:27:51.148 { 00:27:51.148 "name": "pt3", 00:27:51.148 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:51.148 "is_configured": true, 00:27:51.148 "data_offset": 2048, 00:27:51.148 "data_size": 63488 00:27:51.148 }, 00:27:51.148 { 00:27:51.148 "name": "pt4", 00:27:51.148 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:51.148 "is_configured": true, 00:27:51.148 "data_offset": 2048, 00:27:51.148 "data_size": 63488 00:27:51.148 } 00:27:51.148 ] 00:27:51.148 }' 00:27:51.148 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:51.148 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:51.715 [2024-10-28 13:39:05.696948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.715 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.715 "name": "raid_bdev1", 00:27:51.715 "aliases": [ 00:27:51.715 "0cecb23c-4153-4fe6-853e-8514b5363ca9" 00:27:51.715 ], 00:27:51.715 "product_name": "Raid Volume", 00:27:51.715 "block_size": 512, 00:27:51.715 "num_blocks": 253952, 00:27:51.715 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:51.715 "assigned_rate_limits": { 00:27:51.715 "rw_ios_per_sec": 0, 00:27:51.715 "rw_mbytes_per_sec": 0, 00:27:51.715 "r_mbytes_per_sec": 0, 00:27:51.715 "w_mbytes_per_sec": 0 00:27:51.715 }, 00:27:51.715 "claimed": false, 00:27:51.715 "zoned": false, 00:27:51.715 "supported_io_types": { 00:27:51.715 "read": true, 00:27:51.715 "write": true, 00:27:51.715 "unmap": true, 00:27:51.715 "flush": true, 00:27:51.715 "reset": true, 00:27:51.715 "nvme_admin": false, 00:27:51.715 "nvme_io": false, 00:27:51.715 "nvme_io_md": false, 00:27:51.715 "write_zeroes": true, 00:27:51.715 "zcopy": false, 00:27:51.715 "get_zone_info": false, 00:27:51.715 "zone_management": false, 00:27:51.715 "zone_append": false, 00:27:51.715 "compare": false, 00:27:51.715 "compare_and_write": false, 00:27:51.715 "abort": 
false, 00:27:51.715 "seek_hole": false, 00:27:51.715 "seek_data": false, 00:27:51.715 "copy": false, 00:27:51.715 "nvme_iov_md": false 00:27:51.715 }, 00:27:51.715 "memory_domains": [ 00:27:51.715 { 00:27:51.715 "dma_device_id": "system", 00:27:51.715 "dma_device_type": 1 00:27:51.715 }, 00:27:51.715 { 00:27:51.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.715 "dma_device_type": 2 00:27:51.715 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "system", 00:27:51.716 "dma_device_type": 1 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.716 "dma_device_type": 2 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "system", 00:27:51.716 "dma_device_type": 1 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.716 "dma_device_type": 2 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "system", 00:27:51.716 "dma_device_type": 1 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.716 "dma_device_type": 2 00:27:51.716 } 00:27:51.716 ], 00:27:51.716 "driver_specific": { 00:27:51.716 "raid": { 00:27:51.716 "uuid": "0cecb23c-4153-4fe6-853e-8514b5363ca9", 00:27:51.716 "strip_size_kb": 64, 00:27:51.716 "state": "online", 00:27:51.716 "raid_level": "concat", 00:27:51.716 "superblock": true, 00:27:51.716 "num_base_bdevs": 4, 00:27:51.716 "num_base_bdevs_discovered": 4, 00:27:51.716 "num_base_bdevs_operational": 4, 00:27:51.716 "base_bdevs_list": [ 00:27:51.716 { 00:27:51.716 "name": "pt1", 00:27:51.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:51.716 "is_configured": true, 00:27:51.716 "data_offset": 2048, 00:27:51.716 "data_size": 63488 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "name": "pt2", 00:27:51.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:51.716 "is_configured": true, 00:27:51.716 "data_offset": 2048, 00:27:51.716 "data_size": 63488 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "name": "pt3", 
00:27:51.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:51.716 "is_configured": true, 00:27:51.716 "data_offset": 2048, 00:27:51.716 "data_size": 63488 00:27:51.716 }, 00:27:51.716 { 00:27:51.716 "name": "pt4", 00:27:51.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:51.716 "is_configured": true, 00:27:51.716 "data_offset": 2048, 00:27:51.716 "data_size": 63488 00:27:51.716 } 00:27:51.716 ] 00:27:51.716 } 00:27:51.716 } 00:27:51.716 }' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:51.716 pt2 00:27:51.716 pt3 00:27:51.716 pt4' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.716 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.974 13:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.974 13:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.974 [2024-10-28 13:39:06.089257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:51.974 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0cecb23c-4153-4fe6-853e-8514b5363ca9 '!=' 0cecb23c-4153-4fe6-853e-8514b5363ca9 ']' 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85376 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # 
'[' -z 85376 ']' 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85376 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85376 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:52.233 killing process with pid 85376 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85376' 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85376 00:27:52.233 [2024-10-28 13:39:06.171953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:52.233 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85376 00:27:52.233 [2024-10-28 13:39:06.172081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:52.233 [2024-10-28 13:39:06.172272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:52.234 [2024-10-28 13:39:06.172296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:52.234 [2024-10-28 13:39:06.230973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:52.492 13:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:52.492 00:27:52.492 real 0m5.017s 00:27:52.492 user 0m8.035s 00:27:52.492 sys 0m1.001s 00:27:52.492 13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.492 
13:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.492 ************************************ 00:27:52.492 END TEST raid_superblock_test 00:27:52.492 ************************************ 00:27:52.492 13:39:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:27:52.492 13:39:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:52.492 13:39:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.492 13:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:52.492 ************************************ 00:27:52.492 START TEST raid_read_error_test 00:27:52.492 ************************************ 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BgrxR4gZJu 00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85634 
00:27:52.492 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85634 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85634 ']' 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.493 13:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.751 [2024-10-28 13:39:06.729509] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:52.751 [2024-10-28 13:39:06.729804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85634 ] 00:27:52.751 [2024-10-28 13:39:06.902538] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:53.010 [2024-10-28 13:39:06.936078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.010 [2024-10-28 13:39:07.004369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.010 [2024-10-28 13:39:07.090852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:53.010 [2024-10-28 13:39:07.090935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:53.946 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 BaseBdev1_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 true 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.947 [2024-10-28 13:39:07.836245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:53.947 [2024-10-28 13:39:07.836348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.947 [2024-10-28 13:39:07.836377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:53.947 [2024-10-28 13:39:07.836399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.947 [2024-10-28 13:39:07.839583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.947 [2024-10-28 13:39:07.839643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:53.947 BaseBdev1 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 BaseBdev2_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 true 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 [2024-10-28 13:39:07.875931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:53.947 [2024-10-28 13:39:07.876014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.947 [2024-10-28 13:39:07.876039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:53.947 [2024-10-28 13:39:07.876057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.947 [2024-10-28 13:39:07.879032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.947 [2024-10-28 13:39:07.879091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:53.947 BaseBdev2 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 BaseBdev3_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 true 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 [2024-10-28 13:39:07.911303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:53.947 [2024-10-28 13:39:07.911372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.947 [2024-10-28 13:39:07.911400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:53.947 [2024-10-28 13:39:07.911420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.947 [2024-10-28 13:39:07.914567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.947 [2024-10-28 13:39:07.914625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:53.947 BaseBdev3 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 BaseBdev4_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 true 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 [2024-10-28 13:39:07.968814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:53.947 [2024-10-28 13:39:07.968913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:53.947 [2024-10-28 13:39:07.968941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:53.947 [2024-10-28 13:39:07.968960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:53.947 [2024-10-28 13:39:07.972442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:53.947 [2024-10-28 13:39:07.972491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:53.947 BaseBdev4 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 [2024-10-28 13:39:07.976869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:53.947 [2024-10-28 13:39:07.979680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:53.947 [2024-10-28 13:39:07.979794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:53.947 [2024-10-28 13:39:07.979942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:53.947 [2024-10-28 13:39:07.980270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:53.947 [2024-10-28 13:39:07.980305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:53.947 [2024-10-28 13:39:07.980656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:53.947 [2024-10-28 13:39:07.980867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:53.947 [2024-10-28 13:39:07.980892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:53.947 [2024-10-28 13:39:07.981159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:53.947 13:39:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.947 13:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.947 13:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.947 13:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:53.947 "name": "raid_bdev1", 00:27:53.947 "uuid": "eae00f7c-97a3-427e-a098-064a2c371384", 00:27:53.947 "strip_size_kb": 64, 00:27:53.948 "state": "online", 00:27:53.948 "raid_level": "concat", 00:27:53.948 "superblock": true, 00:27:53.948 "num_base_bdevs": 4, 00:27:53.948 "num_base_bdevs_discovered": 4, 00:27:53.948 "num_base_bdevs_operational": 4, 00:27:53.948 "base_bdevs_list": [ 00:27:53.948 { 00:27:53.948 "name": "BaseBdev1", 00:27:53.948 "uuid": "0fa88548-45b9-5a4a-8f49-2b51bd468b33", 00:27:53.948 "is_configured": true, 00:27:53.948 "data_offset": 2048, 00:27:53.948 "data_size": 63488 00:27:53.948 }, 00:27:53.948 { 00:27:53.948 "name": "BaseBdev2", 00:27:53.948 "uuid": "2c1f3589-6e89-551f-961a-dae3dbe6d228", 
00:27:53.948 "is_configured": true, 00:27:53.948 "data_offset": 2048, 00:27:53.948 "data_size": 63488 00:27:53.948 }, 00:27:53.948 { 00:27:53.948 "name": "BaseBdev3", 00:27:53.948 "uuid": "ae65bae8-623e-5032-a9aa-896a247a0d4b", 00:27:53.948 "is_configured": true, 00:27:53.948 "data_offset": 2048, 00:27:53.948 "data_size": 63488 00:27:53.948 }, 00:27:53.948 { 00:27:53.948 "name": "BaseBdev4", 00:27:53.948 "uuid": "7af03c22-582c-5ec6-bb52-b96fa703151d", 00:27:53.948 "is_configured": true, 00:27:53.948 "data_offset": 2048, 00:27:53.948 "data_size": 63488 00:27:53.948 } 00:27:53.948 ] 00:27:53.948 }' 00:27:53.948 13:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:53.948 13:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.515 13:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:54.515 13:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:54.515 [2024-10-28 13:39:08.646063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:55.451 13:39:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.451 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.451 "name": "raid_bdev1", 00:27:55.451 "uuid": "eae00f7c-97a3-427e-a098-064a2c371384", 00:27:55.451 "strip_size_kb": 64, 00:27:55.451 "state": "online", 00:27:55.451 "raid_level": "concat", 00:27:55.451 "superblock": true, 00:27:55.451 "num_base_bdevs": 4, 
00:27:55.451 "num_base_bdevs_discovered": 4, 00:27:55.451 "num_base_bdevs_operational": 4, 00:27:55.451 "base_bdevs_list": [ 00:27:55.451 { 00:27:55.452 "name": "BaseBdev1", 00:27:55.452 "uuid": "0fa88548-45b9-5a4a-8f49-2b51bd468b33", 00:27:55.452 "is_configured": true, 00:27:55.452 "data_offset": 2048, 00:27:55.452 "data_size": 63488 00:27:55.452 }, 00:27:55.452 { 00:27:55.452 "name": "BaseBdev2", 00:27:55.452 "uuid": "2c1f3589-6e89-551f-961a-dae3dbe6d228", 00:27:55.452 "is_configured": true, 00:27:55.452 "data_offset": 2048, 00:27:55.452 "data_size": 63488 00:27:55.452 }, 00:27:55.452 { 00:27:55.452 "name": "BaseBdev3", 00:27:55.452 "uuid": "ae65bae8-623e-5032-a9aa-896a247a0d4b", 00:27:55.452 "is_configured": true, 00:27:55.452 "data_offset": 2048, 00:27:55.452 "data_size": 63488 00:27:55.452 }, 00:27:55.452 { 00:27:55.452 "name": "BaseBdev4", 00:27:55.452 "uuid": "7af03c22-582c-5ec6-bb52-b96fa703151d", 00:27:55.452 "is_configured": true, 00:27:55.452 "data_offset": 2048, 00:27:55.452 "data_size": 63488 00:27:55.452 } 00:27:55.452 ] 00:27:55.452 }' 00:27:55.452 13:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.452 13:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.018 [2024-10-28 13:39:10.044443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:56.018 [2024-10-28 13:39:10.044841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:56.018 { 00:27:56.018 "results": [ 00:27:56.018 { 00:27:56.018 "job": "raid_bdev1", 00:27:56.018 "core_mask": "0x1", 00:27:56.018 "workload": "randrw", 
00:27:56.018 "percentage": 50, 00:27:56.018 "status": "finished", 00:27:56.018 "queue_depth": 1, 00:27:56.018 "io_size": 131072, 00:27:56.018 "runtime": 1.39599, 00:27:56.018 "iops": 10376.14882628099, 00:27:56.018 "mibps": 1297.0186032851238, 00:27:56.018 "io_failed": 1, 00:27:56.018 "io_timeout": 0, 00:27:56.018 "avg_latency_us": 135.5993503445333, 00:27:56.018 "min_latency_us": 38.4, 00:27:56.018 "max_latency_us": 1683.0836363636363 00:27:56.018 } 00:27:56.018 ], 00:27:56.018 "core_count": 1 00:27:56.018 } 00:27:56.018 [2024-10-28 13:39:10.048275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:56.018 [2024-10-28 13:39:10.048453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.018 [2024-10-28 13:39:10.048553] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:56.018 [2024-10-28 13:39:10.048573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85634 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85634 ']' 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85634 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85634 00:27:56.018 killing process with pid 85634 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:56.018 13:39:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85634' 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85634 00:27:56.018 [2024-10-28 13:39:10.094077] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:56.018 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85634 00:27:56.018 [2024-10-28 13:39:10.146180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BgrxR4gZJu 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:27:56.586 00:27:56.586 real 0m3.863s 00:27:56.586 user 0m5.059s 00:27:56.586 sys 0m0.649s 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:56.586 ************************************ 00:27:56.586 END TEST raid_read_error_test 00:27:56.586 ************************************ 00:27:56.586 13:39:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.586 13:39:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:27:56.586 13:39:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:56.586 13:39:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:56.586 13:39:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:56.586 ************************************ 00:27:56.586 START TEST raid_write_error_test 00:27:56.586 ************************************ 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.29iwYZFnLs 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85767 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85767 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85767 ']' 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.586 13:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.586 [2024-10-28 13:39:10.653915] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:27:56.586 [2024-10-28 13:39:10.654409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85767 ] 00:27:56.845 [2024-10-28 13:39:10.811899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:56.845 [2024-10-28 13:39:10.839201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.845 [2024-10-28 13:39:10.895429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.845 [2024-10-28 13:39:10.972237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.845 [2024-10-28 13:39:10.972290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 BaseBdev1_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 true 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 [2024-10-28 13:39:11.698014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:57.786 [2024-10-28 13:39:11.698125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.786 [2024-10-28 13:39:11.698177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:57.786 [2024-10-28 13:39:11.698214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.786 [2024-10-28 13:39:11.701145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.786 [2024-10-28 13:39:11.701206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:57.786 BaseBdev1 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 BaseBdev2_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 true 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 [2024-10-28 13:39:11.742471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:57.786 [2024-10-28 13:39:11.742586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.786 [2024-10-28 13:39:11.742610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:57.786 [2024-10-28 13:39:11.742627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.786 [2024-10-28 13:39:11.745645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.786 [2024-10-28 13:39:11.745691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:57.786 BaseBdev2 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 BaseBdev3_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 true 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 [2024-10-28 13:39:11.782297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:57.786 [2024-10-28 13:39:11.782382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.786 [2024-10-28 13:39:11.782410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:57.786 [2024-10-28 13:39:11.782427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.786 [2024-10-28 13:39:11.785312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.786 [2024-10-28 13:39:11.785359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:57.786 BaseBdev3 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 BaseBdev4_malloc 00:27:57.786 
13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 true 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 [2024-10-28 13:39:11.834262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:57.786 [2024-10-28 13:39:11.834428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.786 [2024-10-28 13:39:11.834459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:57.786 [2024-10-28 13:39:11.834479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.786 [2024-10-28 13:39:11.837656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.786 [2024-10-28 13:39:11.837701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:57.786 BaseBdev4 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:27:57.786 13:39:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 [2024-10-28 13:39:11.842425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:57.786 [2024-10-28 13:39:11.845317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:57.786 [2024-10-28 13:39:11.845443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:57.786 [2024-10-28 13:39:11.845583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:57.786 [2024-10-28 13:39:11.845944] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:57.786 [2024-10-28 13:39:11.845991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:27:57.786 [2024-10-28 13:39:11.846597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:27:57.786 [2024-10-28 13:39:11.846894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:57.786 [2024-10-28 13:39:11.847008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:57.786 [2024-10-28 13:39:11.847481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.786 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.786 "name": "raid_bdev1", 00:27:57.786 "uuid": "41d73b16-6984-4379-8dec-a788d41f9601", 00:27:57.786 "strip_size_kb": 64, 00:27:57.787 "state": "online", 00:27:57.787 "raid_level": "concat", 00:27:57.787 "superblock": true, 00:27:57.787 "num_base_bdevs": 4, 00:27:57.787 "num_base_bdevs_discovered": 4, 00:27:57.787 "num_base_bdevs_operational": 4, 00:27:57.787 "base_bdevs_list": [ 00:27:57.787 { 00:27:57.787 "name": "BaseBdev1", 00:27:57.787 "uuid": "f9d35864-d0af-5660-8b34-0b903e9d8d17", 00:27:57.787 "is_configured": true, 00:27:57.787 "data_offset": 2048, 00:27:57.787 "data_size": 63488 00:27:57.787 }, 00:27:57.787 { 00:27:57.787 
"name": "BaseBdev2", 00:27:57.787 "uuid": "5e6c2a93-e71d-5fcc-9276-d64297109802", 00:27:57.787 "is_configured": true, 00:27:57.787 "data_offset": 2048, 00:27:57.787 "data_size": 63488 00:27:57.787 }, 00:27:57.787 { 00:27:57.787 "name": "BaseBdev3", 00:27:57.787 "uuid": "71eec4a8-0859-57de-9b50-910e9bb7af90", 00:27:57.787 "is_configured": true, 00:27:57.787 "data_offset": 2048, 00:27:57.787 "data_size": 63488 00:27:57.787 }, 00:27:57.787 { 00:27:57.787 "name": "BaseBdev4", 00:27:57.787 "uuid": "8ebb8d71-c969-5c7b-bd0c-fbf5ddab1f5a", 00:27:57.787 "is_configured": true, 00:27:57.787 "data_offset": 2048, 00:27:57.787 "data_size": 63488 00:27:57.787 } 00:27:57.787 ] 00:27:57.787 }' 00:27:57.787 13:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.787 13:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.355 13:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:58.355 13:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:58.614 [2024-10-28 13:39:12.556483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:59.550 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:59.551 "name": "raid_bdev1", 00:27:59.551 "uuid": "41d73b16-6984-4379-8dec-a788d41f9601", 00:27:59.551 "strip_size_kb": 64, 00:27:59.551 "state": "online", 
00:27:59.551 "raid_level": "concat", 00:27:59.551 "superblock": true, 00:27:59.551 "num_base_bdevs": 4, 00:27:59.551 "num_base_bdevs_discovered": 4, 00:27:59.551 "num_base_bdevs_operational": 4, 00:27:59.551 "base_bdevs_list": [ 00:27:59.551 { 00:27:59.551 "name": "BaseBdev1", 00:27:59.551 "uuid": "f9d35864-d0af-5660-8b34-0b903e9d8d17", 00:27:59.551 "is_configured": true, 00:27:59.551 "data_offset": 2048, 00:27:59.551 "data_size": 63488 00:27:59.551 }, 00:27:59.551 { 00:27:59.551 "name": "BaseBdev2", 00:27:59.551 "uuid": "5e6c2a93-e71d-5fcc-9276-d64297109802", 00:27:59.551 "is_configured": true, 00:27:59.551 "data_offset": 2048, 00:27:59.551 "data_size": 63488 00:27:59.551 }, 00:27:59.551 { 00:27:59.551 "name": "BaseBdev3", 00:27:59.551 "uuid": "71eec4a8-0859-57de-9b50-910e9bb7af90", 00:27:59.551 "is_configured": true, 00:27:59.551 "data_offset": 2048, 00:27:59.551 "data_size": 63488 00:27:59.551 }, 00:27:59.551 { 00:27:59.551 "name": "BaseBdev4", 00:27:59.551 "uuid": "8ebb8d71-c969-5c7b-bd0c-fbf5ddab1f5a", 00:27:59.551 "is_configured": true, 00:27:59.551 "data_offset": 2048, 00:27:59.551 "data_size": 63488 00:27:59.551 } 00:27:59.551 ] 00:27:59.551 }' 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:59.551 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.810 [2024-10-28 13:39:13.906006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:59.810 [2024-10-28 13:39:13.906075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:59.810 [2024-10-28 13:39:13.909117] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.810 [2024-10-28 13:39:13.909192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.810 [2024-10-28 13:39:13.909270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.810 [2024-10-28 13:39:13.909289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:59.810 { 00:27:59.810 "results": [ 00:27:59.810 { 00:27:59.810 "job": "raid_bdev1", 00:27:59.810 "core_mask": "0x1", 00:27:59.810 "workload": "randrw", 00:27:59.810 "percentage": 50, 00:27:59.810 "status": "finished", 00:27:59.810 "queue_depth": 1, 00:27:59.810 "io_size": 131072, 00:27:59.810 "runtime": 1.346971, 00:27:59.810 "iops": 9680.98051108747, 00:27:59.810 "mibps": 1210.1225638859337, 00:27:59.810 "io_failed": 1, 00:27:59.810 "io_timeout": 0, 00:27:59.810 "avg_latency_us": 145.66807955329696, 00:27:59.810 "min_latency_us": 37.46909090909091, 00:27:59.810 "max_latency_us": 2010.7636363636364 00:27:59.810 } 00:27:59.810 ], 00:27:59.810 "core_count": 1 00:27:59.810 } 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85767 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85767 ']' 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85767 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85767 00:27:59.810 killing process with pid 85767 00:27:59.810 13:39:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85767' 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85767 00:27:59.810 [2024-10-28 13:39:13.952623] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.810 13:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85767 00:28:00.070 [2024-10-28 13:39:13.999076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.29iwYZFnLs 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:28:00.330 00:28:00.330 real 0m3.804s 00:28:00.330 user 0m4.965s 00:28:00.330 sys 0m0.654s 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:00.330 ************************************ 00:28:00.330 END TEST raid_write_error_test 00:28:00.330 ************************************ 00:28:00.330 13:39:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.330 13:39:14 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:28:00.330 13:39:14 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:28:00.330 13:39:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:00.330 13:39:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.330 13:39:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:00.330 ************************************ 00:28:00.330 START TEST raid_state_function_test 00:28:00.330 ************************************ 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.330 13:39:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85904 00:28:00.330 Process raid pid: 85904 
00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85904' 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85904 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 85904 ']' 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.330 13:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:00.589 [2024-10-28 13:39:14.502352] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:00.589 [2024-10-28 13:39:14.502554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.590 [2024-10-28 13:39:14.653368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:00.590 [2024-10-28 13:39:14.686168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.848 [2024-10-28 13:39:14.756674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.848 [2024-10-28 13:39:14.841639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:00.848 [2024-10-28 13:39:14.842014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.415 [2024-10-28 13:39:15.554817] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.415 [2024-10-28 13:39:15.555130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.415 [2024-10-28 13:39:15.555294] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.415 [2024-10-28 13:39:15.555352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.415 [2024-10-28 13:39:15.555473] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:01.415 [2024-10-28 13:39:15.555558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:01.415 [2024-10-28 13:39:15.555680] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:01.415 [2024-10-28 
13:39:15.555732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.415 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.692 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.692 13:39:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.692 "name": "Existed_Raid", 00:28:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.692 "strip_size_kb": 0, 00:28:01.692 "state": "configuring", 00:28:01.692 "raid_level": "raid1", 00:28:01.692 "superblock": false, 00:28:01.692 "num_base_bdevs": 4, 00:28:01.692 "num_base_bdevs_discovered": 0, 00:28:01.692 "num_base_bdevs_operational": 4, 00:28:01.692 "base_bdevs_list": [ 00:28:01.692 { 00:28:01.692 "name": "BaseBdev1", 00:28:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.692 "is_configured": false, 00:28:01.692 "data_offset": 0, 00:28:01.692 "data_size": 0 00:28:01.692 }, 00:28:01.692 { 00:28:01.692 "name": "BaseBdev2", 00:28:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.692 "is_configured": false, 00:28:01.692 "data_offset": 0, 00:28:01.692 "data_size": 0 00:28:01.692 }, 00:28:01.692 { 00:28:01.692 "name": "BaseBdev3", 00:28:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.692 "is_configured": false, 00:28:01.692 "data_offset": 0, 00:28:01.692 "data_size": 0 00:28:01.692 }, 00:28:01.692 { 00:28:01.692 "name": "BaseBdev4", 00:28:01.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.692 "is_configured": false, 00:28:01.692 "data_offset": 0, 00:28:01.692 "data_size": 0 00:28:01.692 } 00:28:01.692 ] 00:28:01.692 }' 00:28:01.692 13:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.692 13:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.957 [2024-10-28 13:39:16.086867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:28:01.957 [2024-10-28 13:39:16.086924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.957 [2024-10-28 13:39:16.094862] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.957 [2024-10-28 13:39:16.095068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.957 [2024-10-28 13:39:16.095253] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.957 [2024-10-28 13:39:16.095313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.957 [2024-10-28 13:39:16.095432] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:01.957 [2024-10-28 13:39:16.095504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:01.957 [2024-10-28 13:39:16.095563] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:01.957 [2024-10-28 13:39:16.095582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:01.957 13:39:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.957 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 [2024-10-28 13:39:16.119012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.215 BaseBdev1 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 [ 00:28:02.215 { 00:28:02.215 "name": "BaseBdev1", 00:28:02.215 "aliases": [ 
00:28:02.215 "52af2052-cdd8-43c4-9477-fbc15d1f0b9b" 00:28:02.215 ], 00:28:02.215 "product_name": "Malloc disk", 00:28:02.215 "block_size": 512, 00:28:02.215 "num_blocks": 65536, 00:28:02.215 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:02.215 "assigned_rate_limits": { 00:28:02.215 "rw_ios_per_sec": 0, 00:28:02.215 "rw_mbytes_per_sec": 0, 00:28:02.215 "r_mbytes_per_sec": 0, 00:28:02.215 "w_mbytes_per_sec": 0 00:28:02.215 }, 00:28:02.215 "claimed": true, 00:28:02.215 "claim_type": "exclusive_write", 00:28:02.215 "zoned": false, 00:28:02.215 "supported_io_types": { 00:28:02.215 "read": true, 00:28:02.215 "write": true, 00:28:02.215 "unmap": true, 00:28:02.215 "flush": true, 00:28:02.215 "reset": true, 00:28:02.215 "nvme_admin": false, 00:28:02.215 "nvme_io": false, 00:28:02.215 "nvme_io_md": false, 00:28:02.215 "write_zeroes": true, 00:28:02.215 "zcopy": true, 00:28:02.215 "get_zone_info": false, 00:28:02.215 "zone_management": false, 00:28:02.215 "zone_append": false, 00:28:02.215 "compare": false, 00:28:02.215 "compare_and_write": false, 00:28:02.215 "abort": true, 00:28:02.215 "seek_hole": false, 00:28:02.215 "seek_data": false, 00:28:02.215 "copy": true, 00:28:02.215 "nvme_iov_md": false 00:28:02.215 }, 00:28:02.215 "memory_domains": [ 00:28:02.215 { 00:28:02.215 "dma_device_id": "system", 00:28:02.215 "dma_device_type": 1 00:28:02.215 }, 00:28:02.215 { 00:28:02.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:02.215 "dma_device_type": 2 00:28:02.215 } 00:28:02.215 ], 00:28:02.215 "driver_specific": {} 00:28:02.215 } 00:28:02.215 ] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.215 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.215 "name": "Existed_Raid", 00:28:02.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.215 "strip_size_kb": 0, 00:28:02.215 "state": "configuring", 00:28:02.215 "raid_level": "raid1", 00:28:02.215 "superblock": false, 00:28:02.215 "num_base_bdevs": 4, 00:28:02.215 "num_base_bdevs_discovered": 1, 00:28:02.215 "num_base_bdevs_operational": 4, 
00:28:02.215 "base_bdevs_list": [ 00:28:02.215 { 00:28:02.215 "name": "BaseBdev1", 00:28:02.215 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:02.215 "is_configured": true, 00:28:02.215 "data_offset": 0, 00:28:02.215 "data_size": 65536 00:28:02.215 }, 00:28:02.215 { 00:28:02.215 "name": "BaseBdev2", 00:28:02.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.215 "is_configured": false, 00:28:02.215 "data_offset": 0, 00:28:02.215 "data_size": 0 00:28:02.215 }, 00:28:02.215 { 00:28:02.215 "name": "BaseBdev3", 00:28:02.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.215 "is_configured": false, 00:28:02.215 "data_offset": 0, 00:28:02.215 "data_size": 0 00:28:02.215 }, 00:28:02.215 { 00:28:02.215 "name": "BaseBdev4", 00:28:02.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.215 "is_configured": false, 00:28:02.215 "data_offset": 0, 00:28:02.215 "data_size": 0 00:28:02.215 } 00:28:02.216 ] 00:28:02.216 }' 00:28:02.216 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.216 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.781 [2024-10-28 13:39:16.667250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:02.781 [2024-10-28 13:39:16.667347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.781 [2024-10-28 13:39:16.679235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.781 [2024-10-28 13:39:16.682245] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:02.781 [2024-10-28 13:39:16.682419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:02.781 [2024-10-28 13:39:16.682550] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:02.781 [2024-10-28 13:39:16.682604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:02.781 [2024-10-28 13:39:16.682804] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:02.781 [2024-10-28 13:39:16.682926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.781 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.781 "name": "Existed_Raid", 00:28:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.781 "strip_size_kb": 0, 00:28:02.781 "state": "configuring", 00:28:02.781 "raid_level": "raid1", 00:28:02.781 "superblock": false, 00:28:02.781 "num_base_bdevs": 4, 00:28:02.781 "num_base_bdevs_discovered": 1, 00:28:02.781 "num_base_bdevs_operational": 4, 00:28:02.781 "base_bdevs_list": [ 00:28:02.781 { 00:28:02.781 "name": "BaseBdev1", 00:28:02.781 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:02.781 "is_configured": true, 00:28:02.781 "data_offset": 0, 00:28:02.781 "data_size": 65536 00:28:02.781 }, 00:28:02.781 { 
00:28:02.781 "name": "BaseBdev2", 00:28:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.781 "is_configured": false, 00:28:02.781 "data_offset": 0, 00:28:02.781 "data_size": 0 00:28:02.781 }, 00:28:02.781 { 00:28:02.781 "name": "BaseBdev3", 00:28:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.781 "is_configured": false, 00:28:02.781 "data_offset": 0, 00:28:02.781 "data_size": 0 00:28:02.781 }, 00:28:02.781 { 00:28:02.781 "name": "BaseBdev4", 00:28:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.781 "is_configured": false, 00:28:02.781 "data_offset": 0, 00:28:02.781 "data_size": 0 00:28:02.782 } 00:28:02.782 ] 00:28:02.782 }' 00:28:02.782 13:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.782 13:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.348 [2024-10-28 13:39:17.230383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:03.348 BaseBdev2 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.348 [ 00:28:03.348 { 00:28:03.348 "name": "BaseBdev2", 00:28:03.348 "aliases": [ 00:28:03.348 "1e66aa9e-819f-423f-85a2-c7503f6e1292" 00:28:03.348 ], 00:28:03.348 "product_name": "Malloc disk", 00:28:03.348 "block_size": 512, 00:28:03.348 "num_blocks": 65536, 00:28:03.348 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:03.348 "assigned_rate_limits": { 00:28:03.348 "rw_ios_per_sec": 0, 00:28:03.348 "rw_mbytes_per_sec": 0, 00:28:03.348 "r_mbytes_per_sec": 0, 00:28:03.348 "w_mbytes_per_sec": 0 00:28:03.348 }, 00:28:03.348 "claimed": true, 00:28:03.348 "claim_type": "exclusive_write", 00:28:03.348 "zoned": false, 00:28:03.348 "supported_io_types": { 00:28:03.348 "read": true, 00:28:03.348 "write": true, 00:28:03.348 "unmap": true, 00:28:03.348 "flush": true, 00:28:03.348 "reset": true, 00:28:03.348 "nvme_admin": false, 00:28:03.348 "nvme_io": false, 00:28:03.348 "nvme_io_md": false, 00:28:03.348 "write_zeroes": true, 00:28:03.348 "zcopy": true, 00:28:03.348 "get_zone_info": false, 00:28:03.348 "zone_management": false, 
00:28:03.348 "zone_append": false, 00:28:03.348 "compare": false, 00:28:03.348 "compare_and_write": false, 00:28:03.348 "abort": true, 00:28:03.348 "seek_hole": false, 00:28:03.348 "seek_data": false, 00:28:03.348 "copy": true, 00:28:03.348 "nvme_iov_md": false 00:28:03.348 }, 00:28:03.348 "memory_domains": [ 00:28:03.348 { 00:28:03.348 "dma_device_id": "system", 00:28:03.348 "dma_device_type": 1 00:28:03.348 }, 00:28:03.348 { 00:28:03.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.348 "dma_device_type": 2 00:28:03.348 } 00:28:03.348 ], 00:28:03.348 "driver_specific": {} 00:28:03.348 } 00:28:03.348 ] 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:03.348 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.349 13:39:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.349 "name": "Existed_Raid", 00:28:03.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.349 "strip_size_kb": 0, 00:28:03.349 "state": "configuring", 00:28:03.349 "raid_level": "raid1", 00:28:03.349 "superblock": false, 00:28:03.349 "num_base_bdevs": 4, 00:28:03.349 "num_base_bdevs_discovered": 2, 00:28:03.349 "num_base_bdevs_operational": 4, 00:28:03.349 "base_bdevs_list": [ 00:28:03.349 { 00:28:03.349 "name": "BaseBdev1", 00:28:03.349 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:03.349 "is_configured": true, 00:28:03.349 "data_offset": 0, 00:28:03.349 "data_size": 65536 00:28:03.349 }, 00:28:03.349 { 00:28:03.349 "name": "BaseBdev2", 00:28:03.349 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:03.349 "is_configured": true, 00:28:03.349 "data_offset": 0, 00:28:03.349 "data_size": 65536 00:28:03.349 }, 00:28:03.349 { 00:28:03.349 "name": "BaseBdev3", 00:28:03.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.349 "is_configured": false, 00:28:03.349 "data_offset": 0, 00:28:03.349 "data_size": 0 00:28:03.349 }, 00:28:03.349 { 00:28:03.349 "name": "BaseBdev4", 
00:28:03.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.349 "is_configured": false, 00:28:03.349 "data_offset": 0, 00:28:03.349 "data_size": 0 00:28:03.349 } 00:28:03.349 ] 00:28:03.349 }' 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.349 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.607 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:03.607 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.607 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.866 [2024-10-28 13:39:17.794877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:03.866 BaseBdev3 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.866 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.866 [ 00:28:03.866 { 00:28:03.866 "name": "BaseBdev3", 00:28:03.866 "aliases": [ 00:28:03.866 "1426bdf0-25f7-4f29-95ce-ef5dd2a69682" 00:28:03.866 ], 00:28:03.866 "product_name": "Malloc disk", 00:28:03.866 "block_size": 512, 00:28:03.866 "num_blocks": 65536, 00:28:03.866 "uuid": "1426bdf0-25f7-4f29-95ce-ef5dd2a69682", 00:28:03.867 "assigned_rate_limits": { 00:28:03.867 "rw_ios_per_sec": 0, 00:28:03.867 "rw_mbytes_per_sec": 0, 00:28:03.867 "r_mbytes_per_sec": 0, 00:28:03.867 "w_mbytes_per_sec": 0 00:28:03.867 }, 00:28:03.867 "claimed": true, 00:28:03.867 "claim_type": "exclusive_write", 00:28:03.867 "zoned": false, 00:28:03.867 "supported_io_types": { 00:28:03.867 "read": true, 00:28:03.867 "write": true, 00:28:03.867 "unmap": true, 00:28:03.867 "flush": true, 00:28:03.867 "reset": true, 00:28:03.867 "nvme_admin": false, 00:28:03.867 "nvme_io": false, 00:28:03.867 "nvme_io_md": false, 00:28:03.867 "write_zeroes": true, 00:28:03.867 "zcopy": true, 00:28:03.867 "get_zone_info": false, 00:28:03.867 "zone_management": false, 00:28:03.867 "zone_append": false, 00:28:03.867 "compare": false, 00:28:03.867 "compare_and_write": false, 00:28:03.867 "abort": true, 00:28:03.867 "seek_hole": false, 00:28:03.867 "seek_data": false, 00:28:03.867 "copy": true, 00:28:03.867 "nvme_iov_md": false 00:28:03.867 }, 00:28:03.867 "memory_domains": [ 00:28:03.867 { 00:28:03.867 "dma_device_id": "system", 00:28:03.867 "dma_device_type": 1 00:28:03.867 }, 00:28:03.867 { 00:28:03.867 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:28:03.867 "dma_device_type": 2 00:28:03.867 } 00:28:03.867 ], 00:28:03.867 "driver_specific": {} 00:28:03.867 } 00:28:03.867 ] 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.867 13:39:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.867 "name": "Existed_Raid", 00:28:03.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.867 "strip_size_kb": 0, 00:28:03.867 "state": "configuring", 00:28:03.867 "raid_level": "raid1", 00:28:03.867 "superblock": false, 00:28:03.867 "num_base_bdevs": 4, 00:28:03.867 "num_base_bdevs_discovered": 3, 00:28:03.867 "num_base_bdevs_operational": 4, 00:28:03.867 "base_bdevs_list": [ 00:28:03.867 { 00:28:03.867 "name": "BaseBdev1", 00:28:03.867 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:03.867 "is_configured": true, 00:28:03.867 "data_offset": 0, 00:28:03.867 "data_size": 65536 00:28:03.867 }, 00:28:03.867 { 00:28:03.867 "name": "BaseBdev2", 00:28:03.867 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:03.867 "is_configured": true, 00:28:03.867 "data_offset": 0, 00:28:03.867 "data_size": 65536 00:28:03.867 }, 00:28:03.867 { 00:28:03.867 "name": "BaseBdev3", 00:28:03.867 "uuid": "1426bdf0-25f7-4f29-95ce-ef5dd2a69682", 00:28:03.867 "is_configured": true, 00:28:03.867 "data_offset": 0, 00:28:03.867 "data_size": 65536 00:28:03.867 }, 00:28:03.867 { 00:28:03.867 "name": "BaseBdev4", 00:28:03.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.867 "is_configured": false, 00:28:03.867 "data_offset": 0, 00:28:03.867 "data_size": 0 00:28:03.867 } 00:28:03.867 ] 00:28:03.867 }' 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.867 13:39:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.434 13:39:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.434 [2024-10-28 13:39:18.393345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:04.434 [2024-10-28 13:39:18.393428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:04.434 [2024-10-28 13:39:18.393461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:04.434 [2024-10-28 13:39:18.393883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:04.434 [2024-10-28 13:39:18.394091] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:04.434 [2024-10-28 13:39:18.394132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:28:04.434 BaseBdev4 00:28:04.434 [2024-10-28 13:39:18.394473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:04.434 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.435 [ 00:28:04.435 { 00:28:04.435 "name": "BaseBdev4", 00:28:04.435 "aliases": [ 00:28:04.435 "cf6c0d2d-937d-4e6c-9246-ba79dc526dd4" 00:28:04.435 ], 00:28:04.435 "product_name": "Malloc disk", 00:28:04.435 "block_size": 512, 00:28:04.435 "num_blocks": 65536, 00:28:04.435 "uuid": "cf6c0d2d-937d-4e6c-9246-ba79dc526dd4", 00:28:04.435 "assigned_rate_limits": { 00:28:04.435 "rw_ios_per_sec": 0, 00:28:04.435 "rw_mbytes_per_sec": 0, 00:28:04.435 "r_mbytes_per_sec": 0, 00:28:04.435 "w_mbytes_per_sec": 0 00:28:04.435 }, 00:28:04.435 "claimed": true, 00:28:04.435 "claim_type": "exclusive_write", 00:28:04.435 "zoned": false, 00:28:04.435 "supported_io_types": { 00:28:04.435 "read": true, 00:28:04.435 "write": true, 00:28:04.435 "unmap": true, 00:28:04.435 "flush": true, 00:28:04.435 "reset": true, 00:28:04.435 "nvme_admin": false, 00:28:04.435 "nvme_io": false, 00:28:04.435 "nvme_io_md": false, 00:28:04.435 "write_zeroes": true, 00:28:04.435 "zcopy": true, 00:28:04.435 "get_zone_info": false, 00:28:04.435 "zone_management": false, 00:28:04.435 "zone_append": false, 00:28:04.435 "compare": false, 00:28:04.435 "compare_and_write": false, 
00:28:04.435 "abort": true, 00:28:04.435 "seek_hole": false, 00:28:04.435 "seek_data": false, 00:28:04.435 "copy": true, 00:28:04.435 "nvme_iov_md": false 00:28:04.435 }, 00:28:04.435 "memory_domains": [ 00:28:04.435 { 00:28:04.435 "dma_device_id": "system", 00:28:04.435 "dma_device_type": 1 00:28:04.435 }, 00:28:04.435 { 00:28:04.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.435 "dma_device_type": 2 00:28:04.435 } 00:28:04.435 ], 00:28:04.435 "driver_specific": {} 00:28:04.435 } 00:28:04.435 ] 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.435 
13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.435 "name": "Existed_Raid", 00:28:04.435 "uuid": "9269d35b-cbac-4012-86dc-1deb8035cb38", 00:28:04.435 "strip_size_kb": 0, 00:28:04.435 "state": "online", 00:28:04.435 "raid_level": "raid1", 00:28:04.435 "superblock": false, 00:28:04.435 "num_base_bdevs": 4, 00:28:04.435 "num_base_bdevs_discovered": 4, 00:28:04.435 "num_base_bdevs_operational": 4, 00:28:04.435 "base_bdevs_list": [ 00:28:04.435 { 00:28:04.435 "name": "BaseBdev1", 00:28:04.435 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:04.435 "is_configured": true, 00:28:04.435 "data_offset": 0, 00:28:04.435 "data_size": 65536 00:28:04.435 }, 00:28:04.435 { 00:28:04.435 "name": "BaseBdev2", 00:28:04.435 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:04.435 "is_configured": true, 00:28:04.435 "data_offset": 0, 00:28:04.435 "data_size": 65536 00:28:04.435 }, 00:28:04.435 { 00:28:04.435 "name": "BaseBdev3", 00:28:04.435 "uuid": "1426bdf0-25f7-4f29-95ce-ef5dd2a69682", 00:28:04.435 "is_configured": true, 00:28:04.435 "data_offset": 0, 00:28:04.435 "data_size": 65536 00:28:04.435 }, 00:28:04.435 { 00:28:04.435 "name": "BaseBdev4", 00:28:04.435 "uuid": "cf6c0d2d-937d-4e6c-9246-ba79dc526dd4", 00:28:04.435 "is_configured": true, 00:28:04.435 
"data_offset": 0, 00:28:04.435 "data_size": 65536 00:28:04.435 } 00:28:04.435 ] 00:28:04.435 }' 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.435 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.001 [2024-10-28 13:39:18.962202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.001 "name": "Existed_Raid", 00:28:05.001 "aliases": [ 00:28:05.001 "9269d35b-cbac-4012-86dc-1deb8035cb38" 00:28:05.001 ], 00:28:05.001 "product_name": "Raid Volume", 00:28:05.001 "block_size": 512, 00:28:05.001 "num_blocks": 65536, 
00:28:05.001 "uuid": "9269d35b-cbac-4012-86dc-1deb8035cb38", 00:28:05.001 "assigned_rate_limits": { 00:28:05.001 "rw_ios_per_sec": 0, 00:28:05.001 "rw_mbytes_per_sec": 0, 00:28:05.001 "r_mbytes_per_sec": 0, 00:28:05.001 "w_mbytes_per_sec": 0 00:28:05.001 }, 00:28:05.001 "claimed": false, 00:28:05.001 "zoned": false, 00:28:05.001 "supported_io_types": { 00:28:05.001 "read": true, 00:28:05.001 "write": true, 00:28:05.001 "unmap": false, 00:28:05.001 "flush": false, 00:28:05.001 "reset": true, 00:28:05.001 "nvme_admin": false, 00:28:05.001 "nvme_io": false, 00:28:05.001 "nvme_io_md": false, 00:28:05.001 "write_zeroes": true, 00:28:05.001 "zcopy": false, 00:28:05.001 "get_zone_info": false, 00:28:05.001 "zone_management": false, 00:28:05.001 "zone_append": false, 00:28:05.001 "compare": false, 00:28:05.001 "compare_and_write": false, 00:28:05.001 "abort": false, 00:28:05.001 "seek_hole": false, 00:28:05.001 "seek_data": false, 00:28:05.001 "copy": false, 00:28:05.001 "nvme_iov_md": false 00:28:05.001 }, 00:28:05.001 "memory_domains": [ 00:28:05.001 { 00:28:05.001 "dma_device_id": "system", 00:28:05.001 "dma_device_type": 1 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.001 "dma_device_type": 2 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "system", 00:28:05.001 "dma_device_type": 1 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.001 "dma_device_type": 2 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "system", 00:28:05.001 "dma_device_type": 1 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.001 "dma_device_type": 2 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "system", 00:28:05.001 "dma_device_type": 1 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.001 "dma_device_type": 2 00:28:05.001 } 00:28:05.001 ], 00:28:05.001 "driver_specific": { 
00:28:05.001 "raid": { 00:28:05.001 "uuid": "9269d35b-cbac-4012-86dc-1deb8035cb38", 00:28:05.001 "strip_size_kb": 0, 00:28:05.001 "state": "online", 00:28:05.001 "raid_level": "raid1", 00:28:05.001 "superblock": false, 00:28:05.001 "num_base_bdevs": 4, 00:28:05.001 "num_base_bdevs_discovered": 4, 00:28:05.001 "num_base_bdevs_operational": 4, 00:28:05.001 "base_bdevs_list": [ 00:28:05.001 { 00:28:05.001 "name": "BaseBdev1", 00:28:05.001 "uuid": "52af2052-cdd8-43c4-9477-fbc15d1f0b9b", 00:28:05.001 "is_configured": true, 00:28:05.001 "data_offset": 0, 00:28:05.001 "data_size": 65536 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "name": "BaseBdev2", 00:28:05.001 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:05.001 "is_configured": true, 00:28:05.001 "data_offset": 0, 00:28:05.001 "data_size": 65536 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "name": "BaseBdev3", 00:28:05.001 "uuid": "1426bdf0-25f7-4f29-95ce-ef5dd2a69682", 00:28:05.001 "is_configured": true, 00:28:05.001 "data_offset": 0, 00:28:05.001 "data_size": 65536 00:28:05.001 }, 00:28:05.001 { 00:28:05.001 "name": "BaseBdev4", 00:28:05.001 "uuid": "cf6c0d2d-937d-4e6c-9246-ba79dc526dd4", 00:28:05.001 "is_configured": true, 00:28:05.001 "data_offset": 0, 00:28:05.001 "data_size": 65536 00:28:05.001 } 00:28:05.001 ] 00:28:05.001 } 00:28:05.001 } 00:28:05.001 }' 00:28:05.001 13:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:05.001 BaseBdev2 00:28:05.001 BaseBdev3 00:28:05.001 BaseBdev4' 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:05.001 13:39:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:05.001 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:05.259 13:39:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.259 
13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 [2024-10-28 13:39:19.349768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.259 13:39:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.517 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.517 "name": "Existed_Raid", 00:28:05.517 "uuid": "9269d35b-cbac-4012-86dc-1deb8035cb38", 00:28:05.517 "strip_size_kb": 0, 00:28:05.517 "state": "online", 00:28:05.517 "raid_level": "raid1", 00:28:05.517 "superblock": false, 00:28:05.517 "num_base_bdevs": 4, 00:28:05.517 "num_base_bdevs_discovered": 3, 00:28:05.517 "num_base_bdevs_operational": 3, 00:28:05.517 "base_bdevs_list": [ 00:28:05.517 { 00:28:05.517 "name": null, 00:28:05.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.517 "is_configured": false, 00:28:05.517 "data_offset": 0, 00:28:05.517 "data_size": 65536 00:28:05.517 }, 00:28:05.517 { 00:28:05.517 "name": "BaseBdev2", 00:28:05.517 "uuid": "1e66aa9e-819f-423f-85a2-c7503f6e1292", 00:28:05.517 "is_configured": true, 00:28:05.517 "data_offset": 0, 00:28:05.517 "data_size": 65536 00:28:05.517 }, 00:28:05.517 { 00:28:05.517 "name": "BaseBdev3", 00:28:05.517 "uuid": "1426bdf0-25f7-4f29-95ce-ef5dd2a69682", 00:28:05.517 "is_configured": true, 00:28:05.517 "data_offset": 0, 00:28:05.517 "data_size": 65536 00:28:05.517 }, 00:28:05.517 { 00:28:05.517 "name": "BaseBdev4", 00:28:05.517 "uuid": "cf6c0d2d-937d-4e6c-9246-ba79dc526dd4", 00:28:05.517 "is_configured": true, 00:28:05.517 "data_offset": 0, 00:28:05.517 "data_size": 65536 00:28:05.517 } 00:28:05.517 ] 00:28:05.517 }' 00:28:05.517 13:39:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.517 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.779 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 [2024-10-28 13:39:19.982814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:06.045 13:39:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 [2024-10-28 13:39:20.057382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 [2024-10-28 13:39:20.141715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:06.045 [2024-10-28 13:39:20.141910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.045 [2024-10-28 13:39:20.159709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.045 [2024-10-28 13:39:20.159799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.045 [2024-10-28 13:39:20.159822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:06.046 13:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.046 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 BaseBdev2 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:06.304 13:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 [ 00:28:06.304 { 00:28:06.304 "name": "BaseBdev2", 00:28:06.304 "aliases": [ 00:28:06.304 "41511094-f1b0-4c60-9bcc-0de935387e68" 00:28:06.304 ], 00:28:06.304 "product_name": "Malloc disk", 00:28:06.304 "block_size": 512, 00:28:06.304 "num_blocks": 65536, 00:28:06.304 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:06.304 "assigned_rate_limits": { 00:28:06.304 "rw_ios_per_sec": 0, 00:28:06.304 "rw_mbytes_per_sec": 0, 00:28:06.304 "r_mbytes_per_sec": 0, 00:28:06.304 "w_mbytes_per_sec": 0 00:28:06.304 }, 00:28:06.304 "claimed": false, 00:28:06.304 "zoned": false, 00:28:06.304 "supported_io_types": { 00:28:06.304 "read": true, 00:28:06.304 "write": true, 00:28:06.304 "unmap": true, 00:28:06.304 "flush": true, 00:28:06.304 "reset": true, 00:28:06.304 "nvme_admin": false, 00:28:06.304 "nvme_io": false, 00:28:06.304 "nvme_io_md": false, 00:28:06.304 "write_zeroes": true, 00:28:06.304 "zcopy": true, 00:28:06.304 "get_zone_info": false, 00:28:06.304 "zone_management": false, 00:28:06.304 "zone_append": false, 00:28:06.304 "compare": false, 00:28:06.304 "compare_and_write": false, 00:28:06.304 "abort": true, 00:28:06.304 "seek_hole": false, 00:28:06.304 "seek_data": false, 00:28:06.304 "copy": true, 00:28:06.304 "nvme_iov_md": false 00:28:06.304 }, 00:28:06.304 "memory_domains": [ 00:28:06.304 { 00:28:06.304 
"dma_device_id": "system", 00:28:06.304 "dma_device_type": 1 00:28:06.304 }, 00:28:06.304 { 00:28:06.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.304 "dma_device_type": 2 00:28:06.304 } 00:28:06.304 ], 00:28:06.304 "driver_specific": {} 00:28:06.304 } 00:28:06.304 ] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 BaseBdev3 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:06.304 13:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 [ 00:28:06.304 { 00:28:06.304 "name": "BaseBdev3", 00:28:06.304 "aliases": [ 00:28:06.304 "1520614e-eddd-4aaf-8c24-552211fc21a5" 00:28:06.304 ], 00:28:06.304 "product_name": "Malloc disk", 00:28:06.304 "block_size": 512, 00:28:06.304 "num_blocks": 65536, 00:28:06.304 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:06.304 "assigned_rate_limits": { 00:28:06.304 "rw_ios_per_sec": 0, 00:28:06.304 "rw_mbytes_per_sec": 0, 00:28:06.304 "r_mbytes_per_sec": 0, 00:28:06.304 "w_mbytes_per_sec": 0 00:28:06.304 }, 00:28:06.304 "claimed": false, 00:28:06.304 "zoned": false, 00:28:06.304 "supported_io_types": { 00:28:06.304 "read": true, 00:28:06.304 "write": true, 00:28:06.304 "unmap": true, 00:28:06.304 "flush": true, 00:28:06.304 "reset": true, 00:28:06.304 "nvme_admin": false, 00:28:06.304 "nvme_io": false, 00:28:06.304 "nvme_io_md": false, 00:28:06.304 "write_zeroes": true, 00:28:06.304 "zcopy": true, 00:28:06.304 "get_zone_info": false, 00:28:06.304 "zone_management": false, 00:28:06.304 "zone_append": false, 00:28:06.304 "compare": false, 00:28:06.304 "compare_and_write": false, 00:28:06.304 "abort": true, 00:28:06.304 "seek_hole": false, 00:28:06.304 "seek_data": false, 00:28:06.304 "copy": true, 00:28:06.304 "nvme_iov_md": false 00:28:06.304 }, 00:28:06.304 "memory_domains": [ 00:28:06.304 { 00:28:06.304 
"dma_device_id": "system", 00:28:06.304 "dma_device_type": 1 00:28:06.304 }, 00:28:06.304 { 00:28:06.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.305 "dma_device_type": 2 00:28:06.305 } 00:28:06.305 ], 00:28:06.305 "driver_specific": {} 00:28:06.305 } 00:28:06.305 ] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 BaseBdev4 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:06.305 13:39:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 [ 00:28:06.305 { 00:28:06.305 "name": "BaseBdev4", 00:28:06.305 "aliases": [ 00:28:06.305 "2243f7a5-886c-4473-98a8-0b5fa9460e78" 00:28:06.305 ], 00:28:06.305 "product_name": "Malloc disk", 00:28:06.305 "block_size": 512, 00:28:06.305 "num_blocks": 65536, 00:28:06.305 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:06.305 "assigned_rate_limits": { 00:28:06.305 "rw_ios_per_sec": 0, 00:28:06.305 "rw_mbytes_per_sec": 0, 00:28:06.305 "r_mbytes_per_sec": 0, 00:28:06.305 "w_mbytes_per_sec": 0 00:28:06.305 }, 00:28:06.305 "claimed": false, 00:28:06.305 "zoned": false, 00:28:06.305 "supported_io_types": { 00:28:06.305 "read": true, 00:28:06.305 "write": true, 00:28:06.305 "unmap": true, 00:28:06.305 "flush": true, 00:28:06.305 "reset": true, 00:28:06.305 "nvme_admin": false, 00:28:06.305 "nvme_io": false, 00:28:06.305 "nvme_io_md": false, 00:28:06.305 "write_zeroes": true, 00:28:06.305 "zcopy": true, 00:28:06.305 "get_zone_info": false, 00:28:06.305 "zone_management": false, 00:28:06.305 "zone_append": false, 00:28:06.305 "compare": false, 00:28:06.305 "compare_and_write": false, 00:28:06.305 "abort": true, 00:28:06.305 "seek_hole": false, 00:28:06.305 "seek_data": false, 00:28:06.305 "copy": true, 00:28:06.305 "nvme_iov_md": false 00:28:06.305 }, 00:28:06.305 "memory_domains": [ 00:28:06.305 { 00:28:06.305 
"dma_device_id": "system", 00:28:06.305 "dma_device_type": 1 00:28:06.305 }, 00:28:06.305 { 00:28:06.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.305 "dma_device_type": 2 00:28:06.305 } 00:28:06.305 ], 00:28:06.305 "driver_specific": {} 00:28:06.305 } 00:28:06.305 ] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 [2024-10-28 13:39:20.391987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:06.305 [2024-10-28 13:39:20.392078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:06.305 [2024-10-28 13:39:20.392129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:06.305 [2024-10-28 13:39:20.395048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:06.305 [2024-10-28 13:39:20.395164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.305 "name": "Existed_Raid", 00:28:06.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.305 "strip_size_kb": 0, 00:28:06.305 "state": "configuring", 00:28:06.305 "raid_level": "raid1", 00:28:06.305 "superblock": false, 00:28:06.305 "num_base_bdevs": 4, 00:28:06.305 
"num_base_bdevs_discovered": 3, 00:28:06.305 "num_base_bdevs_operational": 4, 00:28:06.305 "base_bdevs_list": [ 00:28:06.305 { 00:28:06.305 "name": "BaseBdev1", 00:28:06.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.305 "is_configured": false, 00:28:06.305 "data_offset": 0, 00:28:06.305 "data_size": 0 00:28:06.305 }, 00:28:06.305 { 00:28:06.305 "name": "BaseBdev2", 00:28:06.305 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:06.305 "is_configured": true, 00:28:06.305 "data_offset": 0, 00:28:06.305 "data_size": 65536 00:28:06.305 }, 00:28:06.305 { 00:28:06.305 "name": "BaseBdev3", 00:28:06.305 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:06.305 "is_configured": true, 00:28:06.305 "data_offset": 0, 00:28:06.305 "data_size": 65536 00:28:06.305 }, 00:28:06.305 { 00:28:06.305 "name": "BaseBdev4", 00:28:06.305 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:06.305 "is_configured": true, 00:28:06.305 "data_offset": 0, 00:28:06.305 "data_size": 65536 00:28:06.305 } 00:28:06.305 ] 00:28:06.305 }' 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.305 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.870 [2024-10-28 13:39:20.972143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:06.870 13:39:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.870 13:39:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.128 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.128 "name": "Existed_Raid", 00:28:07.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.128 "strip_size_kb": 0, 00:28:07.128 "state": "configuring", 00:28:07.128 "raid_level": "raid1", 00:28:07.128 "superblock": false, 00:28:07.128 "num_base_bdevs": 4, 00:28:07.128 "num_base_bdevs_discovered": 2, 00:28:07.128 
"num_base_bdevs_operational": 4, 00:28:07.128 "base_bdevs_list": [ 00:28:07.128 { 00:28:07.128 "name": "BaseBdev1", 00:28:07.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.128 "is_configured": false, 00:28:07.128 "data_offset": 0, 00:28:07.128 "data_size": 0 00:28:07.128 }, 00:28:07.128 { 00:28:07.128 "name": null, 00:28:07.128 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:07.128 "is_configured": false, 00:28:07.128 "data_offset": 0, 00:28:07.128 "data_size": 65536 00:28:07.128 }, 00:28:07.128 { 00:28:07.128 "name": "BaseBdev3", 00:28:07.128 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:07.128 "is_configured": true, 00:28:07.128 "data_offset": 0, 00:28:07.128 "data_size": 65536 00:28:07.128 }, 00:28:07.128 { 00:28:07.128 "name": "BaseBdev4", 00:28:07.128 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:07.128 "is_configured": true, 00:28:07.128 "data_offset": 0, 00:28:07.128 "data_size": 65536 00:28:07.128 } 00:28:07.128 ] 00:28:07.128 }' 00:28:07.128 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.128 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.386 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.386 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.386 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.386 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:07.386 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.644 [2024-10-28 13:39:21.585976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.644 BaseBdev1 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.644 [ 00:28:07.644 { 00:28:07.644 "name": 
"BaseBdev1", 00:28:07.644 "aliases": [ 00:28:07.644 "2b072f26-a352-47cf-a1fa-1dd826e2a2cd" 00:28:07.644 ], 00:28:07.644 "product_name": "Malloc disk", 00:28:07.644 "block_size": 512, 00:28:07.644 "num_blocks": 65536, 00:28:07.644 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:07.644 "assigned_rate_limits": { 00:28:07.644 "rw_ios_per_sec": 0, 00:28:07.644 "rw_mbytes_per_sec": 0, 00:28:07.644 "r_mbytes_per_sec": 0, 00:28:07.644 "w_mbytes_per_sec": 0 00:28:07.644 }, 00:28:07.644 "claimed": true, 00:28:07.644 "claim_type": "exclusive_write", 00:28:07.644 "zoned": false, 00:28:07.644 "supported_io_types": { 00:28:07.644 "read": true, 00:28:07.644 "write": true, 00:28:07.644 "unmap": true, 00:28:07.644 "flush": true, 00:28:07.644 "reset": true, 00:28:07.644 "nvme_admin": false, 00:28:07.644 "nvme_io": false, 00:28:07.644 "nvme_io_md": false, 00:28:07.644 "write_zeroes": true, 00:28:07.644 "zcopy": true, 00:28:07.644 "get_zone_info": false, 00:28:07.644 "zone_management": false, 00:28:07.644 "zone_append": false, 00:28:07.644 "compare": false, 00:28:07.644 "compare_and_write": false, 00:28:07.644 "abort": true, 00:28:07.644 "seek_hole": false, 00:28:07.644 "seek_data": false, 00:28:07.644 "copy": true, 00:28:07.644 "nvme_iov_md": false 00:28:07.644 }, 00:28:07.644 "memory_domains": [ 00:28:07.644 { 00:28:07.644 "dma_device_id": "system", 00:28:07.644 "dma_device_type": 1 00:28:07.644 }, 00:28:07.644 { 00:28:07.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:07.644 "dma_device_type": 2 00:28:07.644 } 00:28:07.644 ], 00:28:07.644 "driver_specific": {} 00:28:07.644 } 00:28:07.644 ] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:07.644 13:39:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.644 "name": "Existed_Raid", 00:28:07.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.644 "strip_size_kb": 0, 00:28:07.644 "state": "configuring", 00:28:07.644 "raid_level": "raid1", 00:28:07.644 "superblock": false, 00:28:07.644 "num_base_bdevs": 4, 00:28:07.644 "num_base_bdevs_discovered": 3, 00:28:07.644 
"num_base_bdevs_operational": 4, 00:28:07.644 "base_bdevs_list": [ 00:28:07.644 { 00:28:07.644 "name": "BaseBdev1", 00:28:07.644 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:07.644 "is_configured": true, 00:28:07.644 "data_offset": 0, 00:28:07.644 "data_size": 65536 00:28:07.644 }, 00:28:07.644 { 00:28:07.644 "name": null, 00:28:07.644 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:07.644 "is_configured": false, 00:28:07.644 "data_offset": 0, 00:28:07.644 "data_size": 65536 00:28:07.644 }, 00:28:07.644 { 00:28:07.644 "name": "BaseBdev3", 00:28:07.644 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:07.644 "is_configured": true, 00:28:07.644 "data_offset": 0, 00:28:07.644 "data_size": 65536 00:28:07.644 }, 00:28:07.644 { 00:28:07.644 "name": "BaseBdev4", 00:28:07.644 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:07.644 "is_configured": true, 00:28:07.644 "data_offset": 0, 00:28:07.644 "data_size": 65536 00:28:07.644 } 00:28:07.644 ] 00:28:07.644 }' 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.644 13:39:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.210 [2024-10-28 13:39:22.222269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.210 13:39:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.210 "name": "Existed_Raid", 00:28:08.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.210 "strip_size_kb": 0, 00:28:08.210 "state": "configuring", 00:28:08.210 "raid_level": "raid1", 00:28:08.210 "superblock": false, 00:28:08.210 "num_base_bdevs": 4, 00:28:08.210 "num_base_bdevs_discovered": 2, 00:28:08.210 "num_base_bdevs_operational": 4, 00:28:08.210 "base_bdevs_list": [ 00:28:08.210 { 00:28:08.210 "name": "BaseBdev1", 00:28:08.210 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:08.210 "is_configured": true, 00:28:08.210 "data_offset": 0, 00:28:08.210 "data_size": 65536 00:28:08.210 }, 00:28:08.210 { 00:28:08.210 "name": null, 00:28:08.210 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:08.210 "is_configured": false, 00:28:08.210 "data_offset": 0, 00:28:08.210 "data_size": 65536 00:28:08.210 }, 00:28:08.210 { 00:28:08.210 "name": null, 00:28:08.210 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:08.210 "is_configured": false, 00:28:08.210 "data_offset": 0, 00:28:08.210 "data_size": 65536 00:28:08.210 }, 00:28:08.210 { 00:28:08.210 "name": "BaseBdev4", 00:28:08.210 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:08.210 "is_configured": true, 00:28:08.210 "data_offset": 0, 00:28:08.210 "data_size": 65536 00:28:08.210 } 00:28:08.210 ] 00:28:08.210 }' 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.210 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.774 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:08.774 13:39:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.774 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.774 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.774 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.775 [2024-10-28 13:39:22.778479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.775 "name": "Existed_Raid", 00:28:08.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.775 "strip_size_kb": 0, 00:28:08.775 "state": "configuring", 00:28:08.775 "raid_level": "raid1", 00:28:08.775 "superblock": false, 00:28:08.775 "num_base_bdevs": 4, 00:28:08.775 "num_base_bdevs_discovered": 3, 00:28:08.775 "num_base_bdevs_operational": 4, 00:28:08.775 "base_bdevs_list": [ 00:28:08.775 { 00:28:08.775 "name": "BaseBdev1", 00:28:08.775 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:08.775 "is_configured": true, 00:28:08.775 "data_offset": 0, 00:28:08.775 "data_size": 65536 00:28:08.775 }, 00:28:08.775 { 00:28:08.775 "name": null, 00:28:08.775 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:08.775 "is_configured": false, 00:28:08.775 "data_offset": 0, 00:28:08.775 "data_size": 65536 00:28:08.775 }, 00:28:08.775 { 00:28:08.775 "name": "BaseBdev3", 00:28:08.775 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:08.775 "is_configured": true, 00:28:08.775 "data_offset": 0, 00:28:08.775 "data_size": 65536 00:28:08.775 }, 00:28:08.775 { 
00:28:08.775 "name": "BaseBdev4", 00:28:08.775 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:08.775 "is_configured": true, 00:28:08.775 "data_offset": 0, 00:28:08.775 "data_size": 65536 00:28:08.775 } 00:28:08.775 ] 00:28:08.775 }' 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.775 13:39:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.339 [2024-10-28 13:39:23.326696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.339 "name": "Existed_Raid", 00:28:09.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.339 "strip_size_kb": 0, 00:28:09.339 "state": "configuring", 00:28:09.339 "raid_level": "raid1", 00:28:09.339 "superblock": false, 00:28:09.339 "num_base_bdevs": 4, 00:28:09.339 "num_base_bdevs_discovered": 2, 00:28:09.339 "num_base_bdevs_operational": 4, 00:28:09.339 "base_bdevs_list": [ 00:28:09.339 { 00:28:09.339 "name": null, 00:28:09.339 "uuid": 
"2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:09.339 "is_configured": false, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 65536 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": null, 00:28:09.339 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:09.339 "is_configured": false, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 65536 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": "BaseBdev3", 00:28:09.339 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:09.339 "is_configured": true, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 65536 00:28:09.339 }, 00:28:09.339 { 00:28:09.339 "name": "BaseBdev4", 00:28:09.339 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:09.339 "is_configured": true, 00:28:09.339 "data_offset": 0, 00:28:09.339 "data_size": 65536 00:28:09.339 } 00:28:09.339 ] 00:28:09.339 }' 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.339 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.905 [2024-10-28 13:39:23.893921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.905 13:39:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.905 "name": "Existed_Raid", 00:28:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.905 "strip_size_kb": 0, 00:28:09.905 "state": "configuring", 00:28:09.905 "raid_level": "raid1", 00:28:09.905 "superblock": false, 00:28:09.905 "num_base_bdevs": 4, 00:28:09.905 "num_base_bdevs_discovered": 3, 00:28:09.905 "num_base_bdevs_operational": 4, 00:28:09.905 "base_bdevs_list": [ 00:28:09.905 { 00:28:09.905 "name": null, 00:28:09.905 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:09.905 "is_configured": false, 00:28:09.905 "data_offset": 0, 00:28:09.905 "data_size": 65536 00:28:09.905 }, 00:28:09.905 { 00:28:09.905 "name": "BaseBdev2", 00:28:09.905 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:09.905 "is_configured": true, 00:28:09.905 "data_offset": 0, 00:28:09.905 "data_size": 65536 00:28:09.905 }, 00:28:09.905 { 00:28:09.905 "name": "BaseBdev3", 00:28:09.905 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:09.905 "is_configured": true, 00:28:09.905 "data_offset": 0, 00:28:09.905 "data_size": 65536 00:28:09.905 }, 00:28:09.905 { 00:28:09.905 "name": "BaseBdev4", 00:28:09.905 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:09.905 "is_configured": true, 00:28:09.905 "data_offset": 0, 00:28:09.905 "data_size": 65536 00:28:09.905 } 00:28:09.905 ] 00:28:09.905 }' 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.905 13:39:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.472 13:39:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b072f26-a352-47cf-a1fa-1dd826e2a2cd 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.472 [2024-10-28 13:39:24.529134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:10.472 [2024-10-28 13:39:24.529235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:10.472 [2024-10-28 13:39:24.529249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:10.472 [2024-10-28 13:39:24.529580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:28:10.472 [2024-10-28 13:39:24.529752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:10.472 
NewBaseBdev 00:28:10.472 [2024-10-28 13:39:24.529981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:10.472 [2024-10-28 13:39:24.530267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:10.472 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.473 [ 00:28:10.473 { 00:28:10.473 "name": "NewBaseBdev", 00:28:10.473 "aliases": [ 00:28:10.473 
"2b072f26-a352-47cf-a1fa-1dd826e2a2cd" 00:28:10.473 ], 00:28:10.473 "product_name": "Malloc disk", 00:28:10.473 "block_size": 512, 00:28:10.473 "num_blocks": 65536, 00:28:10.473 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:10.473 "assigned_rate_limits": { 00:28:10.473 "rw_ios_per_sec": 0, 00:28:10.473 "rw_mbytes_per_sec": 0, 00:28:10.473 "r_mbytes_per_sec": 0, 00:28:10.473 "w_mbytes_per_sec": 0 00:28:10.473 }, 00:28:10.473 "claimed": true, 00:28:10.473 "claim_type": "exclusive_write", 00:28:10.473 "zoned": false, 00:28:10.473 "supported_io_types": { 00:28:10.473 "read": true, 00:28:10.473 "write": true, 00:28:10.473 "unmap": true, 00:28:10.473 "flush": true, 00:28:10.473 "reset": true, 00:28:10.473 "nvme_admin": false, 00:28:10.473 "nvme_io": false, 00:28:10.473 "nvme_io_md": false, 00:28:10.473 "write_zeroes": true, 00:28:10.473 "zcopy": true, 00:28:10.473 "get_zone_info": false, 00:28:10.473 "zone_management": false, 00:28:10.473 "zone_append": false, 00:28:10.473 "compare": false, 00:28:10.473 "compare_and_write": false, 00:28:10.473 "abort": true, 00:28:10.473 "seek_hole": false, 00:28:10.473 "seek_data": false, 00:28:10.473 "copy": true, 00:28:10.473 "nvme_iov_md": false 00:28:10.473 }, 00:28:10.473 "memory_domains": [ 00:28:10.473 { 00:28:10.473 "dma_device_id": "system", 00:28:10.473 "dma_device_type": 1 00:28:10.473 }, 00:28:10.473 { 00:28:10.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.473 "dma_device_type": 2 00:28:10.473 } 00:28:10.473 ], 00:28:10.473 "driver_specific": {} 00:28:10.473 } 00:28:10.473 ] 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:10.473 "name": "Existed_Raid", 00:28:10.473 "uuid": "f225a1d3-07d9-46d0-b075-452373d55204", 00:28:10.473 "strip_size_kb": 0, 00:28:10.473 "state": "online", 00:28:10.473 "raid_level": "raid1", 00:28:10.473 "superblock": false, 00:28:10.473 "num_base_bdevs": 4, 00:28:10.473 "num_base_bdevs_discovered": 4, 00:28:10.473 "num_base_bdevs_operational": 4, 00:28:10.473 "base_bdevs_list": [ 00:28:10.473 
{ 00:28:10.473 "name": "NewBaseBdev", 00:28:10.473 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:10.473 "is_configured": true, 00:28:10.473 "data_offset": 0, 00:28:10.473 "data_size": 65536 00:28:10.473 }, 00:28:10.473 { 00:28:10.473 "name": "BaseBdev2", 00:28:10.473 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:10.473 "is_configured": true, 00:28:10.473 "data_offset": 0, 00:28:10.473 "data_size": 65536 00:28:10.473 }, 00:28:10.473 { 00:28:10.473 "name": "BaseBdev3", 00:28:10.473 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:10.473 "is_configured": true, 00:28:10.473 "data_offset": 0, 00:28:10.473 "data_size": 65536 00:28:10.473 }, 00:28:10.473 { 00:28:10.473 "name": "BaseBdev4", 00:28:10.473 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 00:28:10.473 "is_configured": true, 00:28:10.473 "data_offset": 0, 00:28:10.473 "data_size": 65536 00:28:10.473 } 00:28:10.473 ] 00:28:10.473 }' 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:10.473 13:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:11.041 [2024-10-28 13:39:25.105889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.041 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.041 "name": "Existed_Raid", 00:28:11.041 "aliases": [ 00:28:11.041 "f225a1d3-07d9-46d0-b075-452373d55204" 00:28:11.041 ], 00:28:11.041 "product_name": "Raid Volume", 00:28:11.041 "block_size": 512, 00:28:11.041 "num_blocks": 65536, 00:28:11.041 "uuid": "f225a1d3-07d9-46d0-b075-452373d55204", 00:28:11.041 "assigned_rate_limits": { 00:28:11.041 "rw_ios_per_sec": 0, 00:28:11.041 "rw_mbytes_per_sec": 0, 00:28:11.041 "r_mbytes_per_sec": 0, 00:28:11.041 "w_mbytes_per_sec": 0 00:28:11.041 }, 00:28:11.041 "claimed": false, 00:28:11.041 "zoned": false, 00:28:11.041 "supported_io_types": { 00:28:11.041 "read": true, 00:28:11.041 "write": true, 00:28:11.041 "unmap": false, 00:28:11.041 "flush": false, 00:28:11.041 "reset": true, 00:28:11.041 "nvme_admin": false, 00:28:11.041 "nvme_io": false, 00:28:11.041 "nvme_io_md": false, 00:28:11.041 "write_zeroes": true, 00:28:11.041 "zcopy": false, 00:28:11.041 "get_zone_info": false, 00:28:11.041 "zone_management": false, 00:28:11.041 "zone_append": false, 00:28:11.041 "compare": false, 00:28:11.041 "compare_and_write": false, 00:28:11.041 "abort": false, 00:28:11.041 "seek_hole": false, 00:28:11.041 "seek_data": false, 00:28:11.041 "copy": false, 00:28:11.041 "nvme_iov_md": false 00:28:11.041 }, 00:28:11.041 "memory_domains": [ 00:28:11.041 { 00:28:11.041 "dma_device_id": "system", 00:28:11.041 "dma_device_type": 1 00:28:11.041 }, 00:28:11.041 { 
00:28:11.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.041 "dma_device_type": 2 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "system", 00:28:11.041 "dma_device_type": 1 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.041 "dma_device_type": 2 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "system", 00:28:11.041 "dma_device_type": 1 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.041 "dma_device_type": 2 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "system", 00:28:11.041 "dma_device_type": 1 00:28:11.041 }, 00:28:11.041 { 00:28:11.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:11.041 "dma_device_type": 2 00:28:11.041 } 00:28:11.041 ], 00:28:11.041 "driver_specific": { 00:28:11.041 "raid": { 00:28:11.041 "uuid": "f225a1d3-07d9-46d0-b075-452373d55204", 00:28:11.041 "strip_size_kb": 0, 00:28:11.041 "state": "online", 00:28:11.041 "raid_level": "raid1", 00:28:11.041 "superblock": false, 00:28:11.042 "num_base_bdevs": 4, 00:28:11.042 "num_base_bdevs_discovered": 4, 00:28:11.042 "num_base_bdevs_operational": 4, 00:28:11.042 "base_bdevs_list": [ 00:28:11.042 { 00:28:11.042 "name": "NewBaseBdev", 00:28:11.042 "uuid": "2b072f26-a352-47cf-a1fa-1dd826e2a2cd", 00:28:11.042 "is_configured": true, 00:28:11.042 "data_offset": 0, 00:28:11.042 "data_size": 65536 00:28:11.042 }, 00:28:11.042 { 00:28:11.042 "name": "BaseBdev2", 00:28:11.042 "uuid": "41511094-f1b0-4c60-9bcc-0de935387e68", 00:28:11.042 "is_configured": true, 00:28:11.042 "data_offset": 0, 00:28:11.042 "data_size": 65536 00:28:11.042 }, 00:28:11.042 { 00:28:11.042 "name": "BaseBdev3", 00:28:11.042 "uuid": "1520614e-eddd-4aaf-8c24-552211fc21a5", 00:28:11.042 "is_configured": true, 00:28:11.042 "data_offset": 0, 00:28:11.042 "data_size": 65536 00:28:11.042 }, 00:28:11.042 { 00:28:11.042 "name": "BaseBdev4", 00:28:11.042 "uuid": "2243f7a5-886c-4473-98a8-0b5fa9460e78", 
00:28:11.042 "is_configured": true, 00:28:11.042 "data_offset": 0, 00:28:11.042 "data_size": 65536 00:28:11.042 } 00:28:11.042 ] 00:28:11.042 } 00:28:11.042 } 00:28:11.042 }' 00:28:11.042 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:11.301 BaseBdev2 00:28:11.301 BaseBdev3 00:28:11.301 BaseBdev4' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:11.301 
13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.301 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.560 [2024-10-28 13:39:25.485550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:11.560 [2024-10-28 13:39:25.485628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:11.560 [2024-10-28 13:39:25.485758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:11.560 [2024-10-28 13:39:25.486154] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:11.560 [2024-10-28 13:39:25.486192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85904 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 85904 ']' 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 85904 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85904 00:28:11.560 killing process with pid 85904 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85904' 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 85904 00:28:11.560 13:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 85904 00:28:11.560 [2024-10-28 13:39:25.525298] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:11.560 [2024-10-28 13:39:25.631724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:12.134 13:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:28:12.134 00:28:12.134 real 0m11.666s 00:28:12.134 user 0m20.211s 00:28:12.134 sys 0m1.916s 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:12.135 ************************************ 00:28:12.135 END TEST raid_state_function_test 00:28:12.135 ************************************ 00:28:12.135 13:39:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:28:12.135 13:39:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:28:12.135 13:39:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.135 13:39:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:12.135 ************************************ 00:28:12.135 START TEST raid_state_function_test_sb 00:28:12.135 ************************************ 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:12.135 13:39:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:12.135 Process raid pid: 86578 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86578 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86578' 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86578 00:28:12.135 13:39:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86578 ']' 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.135 13:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.135 [2024-10-28 13:39:26.248684] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:12.135 [2024-10-28 13:39:26.250015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.393 [2024-10-28 13:39:26.409181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:12.393 [2024-10-28 13:39:26.440196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.393 [2024-10-28 13:39:26.520665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.652 [2024-10-28 13:39:26.614499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.652 [2024-10-28 13:39:26.614560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.219 [2024-10-28 13:39:27.324696] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:13.219 [2024-10-28 13:39:27.324818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:13.219 [2024-10-28 13:39:27.324839] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:13.219 [2024-10-28 13:39:27.324852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:13.219 [2024-10-28 13:39:27.324868] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:13.219 [2024-10-28 13:39:27.324880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:13.219 [2024-10-28 13:39:27.324895] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:13.219 
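The `bdev_svc` app above is launched with `waitforlisten 86578`, which blocks until the process is up and listening on the Unix-domain RPC socket `/var/tmp/spdk.sock`. A minimal Python sketch of that wait, assuming a plain connect attempt is enough to detect readiness (the real bash helper also checks the pid and retries RPC calls; the socket path and timeouts here are illustrative):

```python
import socket
import time

def wait_for_listen(sock_path, timeout_s=5.0, poll_s=0.1):
    """Retry connecting to a Unix-domain socket until something accepts,
    mirroring the waitforlisten idea from the log. Returns True once a
    connect succeeds, False if the deadline passes first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            # Socket not created or not listening yet; back off and retry.
            time.sleep(poll_s)
        finally:
            s.close()
    return False

# With no listener at the path, the wait times out and returns False.
print(wait_for_listen("/tmp/no-such-spdk.sock", timeout_s=0.3))  # False
```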
[2024-10-28 13:39:27.324906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.219 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:13.479 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.479 "name": "Existed_Raid", 00:28:13.479 "uuid": "8e4db7d7-7cff-4e37-9daa-1cc99bdde90b", 00:28:13.479 "strip_size_kb": 0, 00:28:13.479 "state": "configuring", 00:28:13.479 "raid_level": "raid1", 00:28:13.479 "superblock": true, 00:28:13.479 "num_base_bdevs": 4, 00:28:13.479 "num_base_bdevs_discovered": 0, 00:28:13.479 "num_base_bdevs_operational": 4, 00:28:13.479 "base_bdevs_list": [ 00:28:13.479 { 00:28:13.479 "name": "BaseBdev1", 00:28:13.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.479 "is_configured": false, 00:28:13.479 "data_offset": 0, 00:28:13.479 "data_size": 0 00:28:13.479 }, 00:28:13.479 { 00:28:13.479 "name": "BaseBdev2", 00:28:13.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.479 "is_configured": false, 00:28:13.479 "data_offset": 0, 00:28:13.479 "data_size": 0 00:28:13.479 }, 00:28:13.479 { 00:28:13.479 "name": "BaseBdev3", 00:28:13.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.479 "is_configured": false, 00:28:13.479 "data_offset": 0, 00:28:13.479 "data_size": 0 00:28:13.479 }, 00:28:13.479 { 00:28:13.479 "name": "BaseBdev4", 00:28:13.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.479 "is_configured": false, 00:28:13.479 "data_offset": 0, 00:28:13.479 "data_size": 0 00:28:13.479 } 00:28:13.479 ] 00:28:13.479 }' 00:28:13.479 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.479 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:13.738 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.738 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 
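The `verify_raid_bdev_state` helper above extracts the `Existed_Raid` entry from `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares its fields against the expected state. A Python rendering of that selection and the comparisons, using field values taken from the JSON dump in the log (a sketch of the check, not the helper itself, which is bash plus jq):

```python
import json

# Abridged bdev_raid_get_bdevs output, shaped like the raid_bdev_info
# dump in the log above (no base bdevs discovered yet).
raid_bdevs = json.loads("""
[{"name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4}]
""")

# Python equivalent of: jq '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The state/level/operational-count comparisons the helper performs.
assert info["state"] == "configuring"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_operational"] == 4
print(info["num_base_bdevs_discovered"])  # 0
```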
[2024-10-28 13:39:27.892830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:13.739 [2024-10-28 13:39:27.893290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.998 [2024-10-28 13:39:27.904796] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:13.998 [2024-10-28 13:39:27.904846] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:13.998 [2024-10-28 13:39:27.904864] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:13.998 [2024-10-28 13:39:27.904877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:13.998 [2024-10-28 13:39:27.904888] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:13.998 [2024-10-28 13:39:27.904898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:13.998 [2024-10-28 13:39:27.904910] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:13.998 [2024-10-28 13:39:27.904920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.998 [2024-10-28 13:39:27.936613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:13.998 BaseBdev1 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.998 [ 00:28:13.998 { 00:28:13.998 "name": "BaseBdev1", 00:28:13.998 "aliases": [ 00:28:13.998 "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b" 00:28:13.998 ], 00:28:13.998 "product_name": "Malloc disk", 00:28:13.998 "block_size": 512, 00:28:13.998 "num_blocks": 65536, 00:28:13.998 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:13.998 "assigned_rate_limits": { 00:28:13.998 "rw_ios_per_sec": 0, 00:28:13.998 "rw_mbytes_per_sec": 0, 00:28:13.998 "r_mbytes_per_sec": 0, 00:28:13.998 "w_mbytes_per_sec": 0 00:28:13.998 }, 00:28:13.998 "claimed": true, 00:28:13.998 "claim_type": "exclusive_write", 00:28:13.998 "zoned": false, 00:28:13.998 "supported_io_types": { 00:28:13.998 "read": true, 00:28:13.998 "write": true, 00:28:13.998 "unmap": true, 00:28:13.998 "flush": true, 00:28:13.998 "reset": true, 00:28:13.998 "nvme_admin": false, 00:28:13.998 "nvme_io": false, 00:28:13.998 "nvme_io_md": false, 00:28:13.998 "write_zeroes": true, 00:28:13.998 "zcopy": true, 00:28:13.998 "get_zone_info": false, 00:28:13.998 "zone_management": false, 00:28:13.998 "zone_append": false, 00:28:13.998 "compare": false, 00:28:13.998 "compare_and_write": false, 00:28:13.998 "abort": true, 00:28:13.998 "seek_hole": false, 00:28:13.998 "seek_data": false, 00:28:13.998 "copy": true, 00:28:13.998 "nvme_iov_md": false 00:28:13.998 }, 00:28:13.998 "memory_domains": [ 00:28:13.998 { 00:28:13.998 "dma_device_id": "system", 00:28:13.998 "dma_device_type": 1 00:28:13.998 }, 00:28:13.998 { 00:28:13.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.998 "dma_device_type": 2 00:28:13.998 } 00:28:13.998 ], 00:28:13.998 "driver_specific": {} 00:28:13.998 } 00:28:13.998 ] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:13.998 
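`bdev_malloc_create 32 512 -b BaseBdev1` above creates a 32 MiB malloc bdev with 512-byte blocks, and the descriptor dumped by `bdev_get_bdevs` shows it claimed with `exclusive_write` once the raid takes it (the "bdev BaseBdev1 is claimed" debug line). A short sketch of the consistency checks implied by that dump, with the JSON literal abridged from the log:

```python
import json

# BaseBdev1 descriptor as dumped by bdev_get_bdevs in the log (abridged).
bdev = json.loads("""
{"name": "BaseBdev1",
 "product_name": "Malloc disk",
 "block_size": 512,
 "num_blocks": 65536,
 "claimed": true,
 "claim_type": "exclusive_write"}
""")

# A configured base bdev must be claimed exclusively by the raid module.
assert bdev["claimed"] and bdev["claim_type"] == "exclusive_write"

# block_size * num_blocks recovers the size passed to bdev_malloc_create.
size_bytes = bdev["block_size"] * bdev["num_blocks"]
print(size_bytes)  # 33554432, i.e. the 32 MiB from "bdev_malloc_create 32 512"
```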
13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.998 13:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.998 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.998 "name": "Existed_Raid", 00:28:13.998 "uuid": "679033c7-49f9-45dd-8bd8-ccdc6756c087", 00:28:13.998 "strip_size_kb": 0, 
00:28:13.998 "state": "configuring", 00:28:13.998 "raid_level": "raid1", 00:28:13.998 "superblock": true, 00:28:13.998 "num_base_bdevs": 4, 00:28:13.998 "num_base_bdevs_discovered": 1, 00:28:13.998 "num_base_bdevs_operational": 4, 00:28:13.998 "base_bdevs_list": [ 00:28:13.998 { 00:28:13.998 "name": "BaseBdev1", 00:28:13.998 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:13.998 "is_configured": true, 00:28:13.998 "data_offset": 2048, 00:28:13.998 "data_size": 63488 00:28:13.998 }, 00:28:13.998 { 00:28:13.998 "name": "BaseBdev2", 00:28:13.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.998 "is_configured": false, 00:28:13.998 "data_offset": 0, 00:28:13.998 "data_size": 0 00:28:13.998 }, 00:28:13.998 { 00:28:13.998 "name": "BaseBdev3", 00:28:13.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.998 "is_configured": false, 00:28:13.998 "data_offset": 0, 00:28:13.998 "data_size": 0 00:28:13.998 }, 00:28:13.998 { 00:28:13.998 "name": "BaseBdev4", 00:28:13.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.998 "is_configured": false, 00:28:13.998 "data_offset": 0, 00:28:13.998 "data_size": 0 00:28:13.998 } 00:28:13.998 ] 00:28:13.998 }' 00:28:13.998 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.998 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.565 [2024-10-28 13:39:28.541101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:14.565 [2024-10-28 13:39:28.541248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.565 [2024-10-28 13:39:28.553074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:14.565 [2024-10-28 13:39:28.556131] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:14.565 [2024-10-28 13:39:28.556324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:14.565 [2024-10-28 13:39:28.556454] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:14.565 [2024-10-28 13:39:28.556604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:14.565 [2024-10-28 13:39:28.556726] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:14.565 [2024-10-28 13:39:28.556843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.565 "name": "Existed_Raid", 00:28:14.565 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:14.565 "strip_size_kb": 0, 00:28:14.565 "state": "configuring", 00:28:14.565 "raid_level": "raid1", 00:28:14.565 "superblock": true, 00:28:14.565 "num_base_bdevs": 4, 00:28:14.565 "num_base_bdevs_discovered": 1, 00:28:14.565 
"num_base_bdevs_operational": 4, 00:28:14.565 "base_bdevs_list": [ 00:28:14.565 { 00:28:14.565 "name": "BaseBdev1", 00:28:14.565 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:14.565 "is_configured": true, 00:28:14.565 "data_offset": 2048, 00:28:14.565 "data_size": 63488 00:28:14.565 }, 00:28:14.565 { 00:28:14.565 "name": "BaseBdev2", 00:28:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.565 "is_configured": false, 00:28:14.565 "data_offset": 0, 00:28:14.565 "data_size": 0 00:28:14.565 }, 00:28:14.565 { 00:28:14.565 "name": "BaseBdev3", 00:28:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.565 "is_configured": false, 00:28:14.565 "data_offset": 0, 00:28:14.565 "data_size": 0 00:28:14.565 }, 00:28:14.565 { 00:28:14.565 "name": "BaseBdev4", 00:28:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.565 "is_configured": false, 00:28:14.565 "data_offset": 0, 00:28:14.565 "data_size": 0 00:28:14.565 } 00:28:14.565 ] 00:28:14.565 }' 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.565 13:39:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.131 [2024-10-28 13:39:29.132871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:15.131 BaseBdev2 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.131 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.131 [ 00:28:15.131 { 00:28:15.131 "name": "BaseBdev2", 00:28:15.132 "aliases": [ 00:28:15.132 "295943f6-af51-436e-be20-a376ae2643dc" 00:28:15.132 ], 00:28:15.132 "product_name": "Malloc disk", 00:28:15.132 "block_size": 512, 00:28:15.132 "num_blocks": 65536, 00:28:15.132 "uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:15.132 "assigned_rate_limits": { 00:28:15.132 "rw_ios_per_sec": 0, 00:28:15.132 "rw_mbytes_per_sec": 0, 00:28:15.132 "r_mbytes_per_sec": 0, 00:28:15.132 "w_mbytes_per_sec": 0 00:28:15.132 }, 00:28:15.132 "claimed": true, 00:28:15.132 "claim_type": "exclusive_write", 00:28:15.132 "zoned": false, 00:28:15.132 "supported_io_types": { 
00:28:15.132 "read": true, 00:28:15.132 "write": true, 00:28:15.132 "unmap": true, 00:28:15.132 "flush": true, 00:28:15.132 "reset": true, 00:28:15.132 "nvme_admin": false, 00:28:15.132 "nvme_io": false, 00:28:15.132 "nvme_io_md": false, 00:28:15.132 "write_zeroes": true, 00:28:15.132 "zcopy": true, 00:28:15.132 "get_zone_info": false, 00:28:15.132 "zone_management": false, 00:28:15.132 "zone_append": false, 00:28:15.132 "compare": false, 00:28:15.132 "compare_and_write": false, 00:28:15.132 "abort": true, 00:28:15.132 "seek_hole": false, 00:28:15.132 "seek_data": false, 00:28:15.132 "copy": true, 00:28:15.132 "nvme_iov_md": false 00:28:15.132 }, 00:28:15.132 "memory_domains": [ 00:28:15.132 { 00:28:15.132 "dma_device_id": "system", 00:28:15.132 "dma_device_type": 1 00:28:15.132 }, 00:28:15.132 { 00:28:15.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.132 "dma_device_type": 2 00:28:15.132 } 00:28:15.132 ], 00:28:15.132 "driver_specific": {} 00:28:15.132 } 00:28:15.132 ] 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.132 "name": "Existed_Raid", 00:28:15.132 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:15.132 "strip_size_kb": 0, 00:28:15.132 "state": "configuring", 00:28:15.132 "raid_level": "raid1", 00:28:15.132 "superblock": true, 00:28:15.132 "num_base_bdevs": 4, 00:28:15.132 "num_base_bdevs_discovered": 2, 00:28:15.132 "num_base_bdevs_operational": 4, 00:28:15.132 "base_bdevs_list": [ 00:28:15.132 { 00:28:15.132 "name": "BaseBdev1", 00:28:15.132 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:15.132 "is_configured": true, 00:28:15.132 "data_offset": 2048, 00:28:15.132 "data_size": 63488 00:28:15.132 }, 00:28:15.132 { 00:28:15.132 "name": "BaseBdev2", 00:28:15.132 
"uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:15.132 "is_configured": true, 00:28:15.132 "data_offset": 2048, 00:28:15.132 "data_size": 63488 00:28:15.132 }, 00:28:15.132 { 00:28:15.132 "name": "BaseBdev3", 00:28:15.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.132 "is_configured": false, 00:28:15.132 "data_offset": 0, 00:28:15.132 "data_size": 0 00:28:15.132 }, 00:28:15.132 { 00:28:15.132 "name": "BaseBdev4", 00:28:15.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.132 "is_configured": false, 00:28:15.132 "data_offset": 0, 00:28:15.132 "data_size": 0 00:28:15.132 } 00:28:15.132 ] 00:28:15.132 }' 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.132 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.706 [2024-10-28 13:39:29.720527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:15.706 BaseBdev3 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.706 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.707 [ 00:28:15.707 { 00:28:15.707 "name": "BaseBdev3", 00:28:15.707 "aliases": [ 00:28:15.707 "27424149-e136-4d75-ae8c-3ab72bcc77a2" 00:28:15.707 ], 00:28:15.707 "product_name": "Malloc disk", 00:28:15.707 "block_size": 512, 00:28:15.707 "num_blocks": 65536, 00:28:15.707 "uuid": "27424149-e136-4d75-ae8c-3ab72bcc77a2", 00:28:15.707 "assigned_rate_limits": { 00:28:15.707 "rw_ios_per_sec": 0, 00:28:15.707 "rw_mbytes_per_sec": 0, 00:28:15.707 "r_mbytes_per_sec": 0, 00:28:15.707 "w_mbytes_per_sec": 0 00:28:15.707 }, 00:28:15.707 "claimed": true, 00:28:15.707 "claim_type": "exclusive_write", 00:28:15.707 "zoned": false, 00:28:15.707 "supported_io_types": { 00:28:15.707 "read": true, 00:28:15.707 "write": true, 00:28:15.707 "unmap": true, 00:28:15.707 "flush": true, 00:28:15.707 "reset": true, 00:28:15.707 "nvme_admin": false, 00:28:15.707 "nvme_io": false, 00:28:15.707 "nvme_io_md": false, 00:28:15.707 "write_zeroes": true, 00:28:15.707 "zcopy": true, 00:28:15.707 "get_zone_info": false, 00:28:15.707 
"zone_management": false, 00:28:15.707 "zone_append": false, 00:28:15.707 "compare": false, 00:28:15.707 "compare_and_write": false, 00:28:15.707 "abort": true, 00:28:15.707 "seek_hole": false, 00:28:15.707 "seek_data": false, 00:28:15.707 "copy": true, 00:28:15.707 "nvme_iov_md": false 00:28:15.707 }, 00:28:15.707 "memory_domains": [ 00:28:15.707 { 00:28:15.707 "dma_device_id": "system", 00:28:15.707 "dma_device_type": 1 00:28:15.707 }, 00:28:15.707 { 00:28:15.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.707 "dma_device_type": 2 00:28:15.707 } 00:28:15.707 ], 00:28:15.707 "driver_specific": {} 00:28:15.707 } 00:28:15.707 ] 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.707 "name": "Existed_Raid", 00:28:15.707 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:15.707 "strip_size_kb": 0, 00:28:15.707 "state": "configuring", 00:28:15.707 "raid_level": "raid1", 00:28:15.707 "superblock": true, 00:28:15.707 "num_base_bdevs": 4, 00:28:15.707 "num_base_bdevs_discovered": 3, 00:28:15.707 "num_base_bdevs_operational": 4, 00:28:15.707 "base_bdevs_list": [ 00:28:15.707 { 00:28:15.707 "name": "BaseBdev1", 00:28:15.707 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:15.707 "is_configured": true, 00:28:15.707 "data_offset": 2048, 00:28:15.707 "data_size": 63488 00:28:15.707 }, 00:28:15.707 { 00:28:15.707 "name": "BaseBdev2", 00:28:15.707 "uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:15.707 "is_configured": true, 00:28:15.707 "data_offset": 2048, 00:28:15.707 "data_size": 63488 00:28:15.707 }, 00:28:15.707 { 00:28:15.707 "name": "BaseBdev3", 00:28:15.707 "uuid": "27424149-e136-4d75-ae8c-3ab72bcc77a2", 00:28:15.707 "is_configured": true, 00:28:15.707 "data_offset": 2048, 
00:28:15.707 "data_size": 63488 00:28:15.707 }, 00:28:15.707 { 00:28:15.707 "name": "BaseBdev4", 00:28:15.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.707 "is_configured": false, 00:28:15.707 "data_offset": 0, 00:28:15.707 "data_size": 0 00:28:15.707 } 00:28:15.707 ] 00:28:15.707 }' 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.707 13:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.278 [2024-10-28 13:39:30.296862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:16.278 [2024-10-28 13:39:30.297689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:16.278 [2024-10-28 13:39:30.297735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:16.278 BaseBdev4 00:28:16.278 [2024-10-28 13:39:30.298144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:16.278 [2024-10-28 13:39:30.298398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:16.278 [2024-10-28 13:39:30.298447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:28:16.278 [2024-10-28 13:39:30.298627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.278 [ 00:28:16.278 { 00:28:16.278 "name": "BaseBdev4", 00:28:16.278 "aliases": [ 00:28:16.278 "9b812564-1c29-49a2-bd08-100d7b19050e" 00:28:16.278 ], 00:28:16.278 "product_name": "Malloc disk", 00:28:16.278 "block_size": 512, 00:28:16.278 "num_blocks": 65536, 00:28:16.278 "uuid": "9b812564-1c29-49a2-bd08-100d7b19050e", 00:28:16.278 "assigned_rate_limits": { 00:28:16.278 "rw_ios_per_sec": 0, 00:28:16.278 "rw_mbytes_per_sec": 0, 00:28:16.278 "r_mbytes_per_sec": 0, 00:28:16.278 "w_mbytes_per_sec": 0 00:28:16.278 }, 00:28:16.278 "claimed": true, 00:28:16.278 "claim_type": 
"exclusive_write", 00:28:16.278 "zoned": false, 00:28:16.278 "supported_io_types": { 00:28:16.278 "read": true, 00:28:16.278 "write": true, 00:28:16.278 "unmap": true, 00:28:16.278 "flush": true, 00:28:16.278 "reset": true, 00:28:16.278 "nvme_admin": false, 00:28:16.278 "nvme_io": false, 00:28:16.278 "nvme_io_md": false, 00:28:16.278 "write_zeroes": true, 00:28:16.278 "zcopy": true, 00:28:16.278 "get_zone_info": false, 00:28:16.278 "zone_management": false, 00:28:16.278 "zone_append": false, 00:28:16.278 "compare": false, 00:28:16.278 "compare_and_write": false, 00:28:16.278 "abort": true, 00:28:16.278 "seek_hole": false, 00:28:16.278 "seek_data": false, 00:28:16.278 "copy": true, 00:28:16.278 "nvme_iov_md": false 00:28:16.278 }, 00:28:16.278 "memory_domains": [ 00:28:16.278 { 00:28:16.278 "dma_device_id": "system", 00:28:16.278 "dma_device_type": 1 00:28:16.278 }, 00:28:16.278 { 00:28:16.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.278 "dma_device_type": 2 00:28:16.278 } 00:28:16.278 ], 00:28:16.278 "driver_specific": {} 00:28:16.278 } 00:28:16.278 ] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.278 "name": "Existed_Raid", 00:28:16.278 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:16.278 "strip_size_kb": 0, 00:28:16.278 "state": "online", 00:28:16.278 "raid_level": "raid1", 00:28:16.278 "superblock": true, 00:28:16.278 "num_base_bdevs": 4, 00:28:16.278 "num_base_bdevs_discovered": 4, 00:28:16.278 "num_base_bdevs_operational": 4, 00:28:16.278 "base_bdevs_list": [ 00:28:16.278 { 00:28:16.278 "name": "BaseBdev1", 00:28:16.278 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:16.278 "is_configured": true, 00:28:16.278 "data_offset": 2048, 00:28:16.278 "data_size": 63488 
00:28:16.278 }, 00:28:16.278 { 00:28:16.278 "name": "BaseBdev2", 00:28:16.278 "uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:16.278 "is_configured": true, 00:28:16.278 "data_offset": 2048, 00:28:16.278 "data_size": 63488 00:28:16.278 }, 00:28:16.278 { 00:28:16.278 "name": "BaseBdev3", 00:28:16.278 "uuid": "27424149-e136-4d75-ae8c-3ab72bcc77a2", 00:28:16.278 "is_configured": true, 00:28:16.278 "data_offset": 2048, 00:28:16.278 "data_size": 63488 00:28:16.278 }, 00:28:16.278 { 00:28:16.278 "name": "BaseBdev4", 00:28:16.278 "uuid": "9b812564-1c29-49a2-bd08-100d7b19050e", 00:28:16.278 "is_configured": true, 00:28:16.278 "data_offset": 2048, 00:28:16.278 "data_size": 63488 00:28:16.278 } 00:28:16.278 ] 00:28:16.278 }' 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.278 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:16.847 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.848 
13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.848 [2024-10-28 13:39:30.877531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:16.848 "name": "Existed_Raid", 00:28:16.848 "aliases": [ 00:28:16.848 "9e30e5c6-ada0-4c4c-9eae-e5593108622d" 00:28:16.848 ], 00:28:16.848 "product_name": "Raid Volume", 00:28:16.848 "block_size": 512, 00:28:16.848 "num_blocks": 63488, 00:28:16.848 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:16.848 "assigned_rate_limits": { 00:28:16.848 "rw_ios_per_sec": 0, 00:28:16.848 "rw_mbytes_per_sec": 0, 00:28:16.848 "r_mbytes_per_sec": 0, 00:28:16.848 "w_mbytes_per_sec": 0 00:28:16.848 }, 00:28:16.848 "claimed": false, 00:28:16.848 "zoned": false, 00:28:16.848 "supported_io_types": { 00:28:16.848 "read": true, 00:28:16.848 "write": true, 00:28:16.848 "unmap": false, 00:28:16.848 "flush": false, 00:28:16.848 "reset": true, 00:28:16.848 "nvme_admin": false, 00:28:16.848 "nvme_io": false, 00:28:16.848 "nvme_io_md": false, 00:28:16.848 "write_zeroes": true, 00:28:16.848 "zcopy": false, 00:28:16.848 "get_zone_info": false, 00:28:16.848 "zone_management": false, 00:28:16.848 "zone_append": false, 00:28:16.848 "compare": false, 00:28:16.848 "compare_and_write": false, 00:28:16.848 "abort": false, 00:28:16.848 "seek_hole": false, 00:28:16.848 "seek_data": false, 00:28:16.848 "copy": false, 00:28:16.848 "nvme_iov_md": false 00:28:16.848 }, 00:28:16.848 "memory_domains": [ 00:28:16.848 { 00:28:16.848 "dma_device_id": "system", 00:28:16.848 "dma_device_type": 1 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.848 "dma_device_type": 2 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "system", 
00:28:16.848 "dma_device_type": 1 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.848 "dma_device_type": 2 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "system", 00:28:16.848 "dma_device_type": 1 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.848 "dma_device_type": 2 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "system", 00:28:16.848 "dma_device_type": 1 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.848 "dma_device_type": 2 00:28:16.848 } 00:28:16.848 ], 00:28:16.848 "driver_specific": { 00:28:16.848 "raid": { 00:28:16.848 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:16.848 "strip_size_kb": 0, 00:28:16.848 "state": "online", 00:28:16.848 "raid_level": "raid1", 00:28:16.848 "superblock": true, 00:28:16.848 "num_base_bdevs": 4, 00:28:16.848 "num_base_bdevs_discovered": 4, 00:28:16.848 "num_base_bdevs_operational": 4, 00:28:16.848 "base_bdevs_list": [ 00:28:16.848 { 00:28:16.848 "name": "BaseBdev1", 00:28:16.848 "uuid": "bbe4bed8-2eb4-441b-9fc8-4d6dde99633b", 00:28:16.848 "is_configured": true, 00:28:16.848 "data_offset": 2048, 00:28:16.848 "data_size": 63488 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "name": "BaseBdev2", 00:28:16.848 "uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:16.848 "is_configured": true, 00:28:16.848 "data_offset": 2048, 00:28:16.848 "data_size": 63488 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "name": "BaseBdev3", 00:28:16.848 "uuid": "27424149-e136-4d75-ae8c-3ab72bcc77a2", 00:28:16.848 "is_configured": true, 00:28:16.848 "data_offset": 2048, 00:28:16.848 "data_size": 63488 00:28:16.848 }, 00:28:16.848 { 00:28:16.848 "name": "BaseBdev4", 00:28:16.848 "uuid": "9b812564-1c29-49a2-bd08-100d7b19050e", 00:28:16.848 "is_configured": true, 00:28:16.848 "data_offset": 2048, 00:28:16.848 "data_size": 63488 00:28:16.848 } 00:28:16.848 ] 00:28:16.848 } 00:28:16.848 
} 00:28:16.848 }' 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:16.848 BaseBdev2 00:28:16.848 BaseBdev3 00:28:16.848 BaseBdev4' 00:28:16.848 13:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.107 13:39:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:17.107 13:39:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.107 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.366 [2024-10-28 13:39:31.273377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.366 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.367 "name": "Existed_Raid", 00:28:17.367 "uuid": "9e30e5c6-ada0-4c4c-9eae-e5593108622d", 00:28:17.367 "strip_size_kb": 0, 00:28:17.367 "state": "online", 00:28:17.367 "raid_level": "raid1", 00:28:17.367 "superblock": true, 00:28:17.367 "num_base_bdevs": 4, 00:28:17.367 "num_base_bdevs_discovered": 3, 00:28:17.367 "num_base_bdevs_operational": 3, 00:28:17.367 "base_bdevs_list": [ 00:28:17.367 { 00:28:17.367 "name": null, 00:28:17.367 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:17.367 "is_configured": false, 00:28:17.367 "data_offset": 0, 00:28:17.367 "data_size": 63488 00:28:17.367 }, 00:28:17.367 { 00:28:17.367 "name": "BaseBdev2", 00:28:17.367 "uuid": "295943f6-af51-436e-be20-a376ae2643dc", 00:28:17.367 "is_configured": true, 00:28:17.367 "data_offset": 2048, 00:28:17.367 "data_size": 63488 00:28:17.367 }, 00:28:17.367 { 00:28:17.367 "name": "BaseBdev3", 00:28:17.367 "uuid": "27424149-e136-4d75-ae8c-3ab72bcc77a2", 00:28:17.367 "is_configured": true, 00:28:17.367 "data_offset": 2048, 00:28:17.367 "data_size": 63488 00:28:17.367 }, 00:28:17.367 { 00:28:17.367 "name": "BaseBdev4", 00:28:17.367 "uuid": "9b812564-1c29-49a2-bd08-100d7b19050e", 00:28:17.367 "is_configured": true, 00:28:17.367 "data_offset": 2048, 00:28:17.367 "data_size": 63488 00:28:17.367 } 00:28:17.367 ] 00:28:17.367 }' 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.367 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:17.935 13:39:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 [2024-10-28 13:39:31.891363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.935 [2024-10-28 13:39:31.983935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.935 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.935 [2024-10-28 13:39:32.075168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:17.935 [2024-10-28 13:39:32.075358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:18.194 [2024-10-28 13:39:32.099914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:18.194 [2024-10-28 
13:39:32.099995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:18.194 [2024-10-28 13:39:32.100014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.194 13:39:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 BaseBdev2 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 [ 00:28:18.194 { 00:28:18.194 "name": "BaseBdev2", 00:28:18.194 "aliases": [ 00:28:18.194 "3ad31e90-d824-4274-af05-689f0ac1ddc5" 00:28:18.194 ], 00:28:18.194 "product_name": "Malloc disk", 00:28:18.194 "block_size": 512, 00:28:18.194 "num_blocks": 65536, 00:28:18.194 
"uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:18.194 "assigned_rate_limits": { 00:28:18.194 "rw_ios_per_sec": 0, 00:28:18.194 "rw_mbytes_per_sec": 0, 00:28:18.194 "r_mbytes_per_sec": 0, 00:28:18.194 "w_mbytes_per_sec": 0 00:28:18.194 }, 00:28:18.194 "claimed": false, 00:28:18.194 "zoned": false, 00:28:18.194 "supported_io_types": { 00:28:18.194 "read": true, 00:28:18.194 "write": true, 00:28:18.194 "unmap": true, 00:28:18.194 "flush": true, 00:28:18.194 "reset": true, 00:28:18.194 "nvme_admin": false, 00:28:18.194 "nvme_io": false, 00:28:18.194 "nvme_io_md": false, 00:28:18.194 "write_zeroes": true, 00:28:18.194 "zcopy": true, 00:28:18.194 "get_zone_info": false, 00:28:18.194 "zone_management": false, 00:28:18.194 "zone_append": false, 00:28:18.194 "compare": false, 00:28:18.194 "compare_and_write": false, 00:28:18.194 "abort": true, 00:28:18.194 "seek_hole": false, 00:28:18.194 "seek_data": false, 00:28:18.194 "copy": true, 00:28:18.194 "nvme_iov_md": false 00:28:18.194 }, 00:28:18.194 "memory_domains": [ 00:28:18.194 { 00:28:18.194 "dma_device_id": "system", 00:28:18.194 "dma_device_type": 1 00:28:18.194 }, 00:28:18.194 { 00:28:18.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.194 "dma_device_type": 2 00:28:18.194 } 00:28:18.194 ], 00:28:18.194 "driver_specific": {} 00:28:18.194 } 00:28:18.194 ] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.194 BaseBdev3 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:18.194 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.195 [ 00:28:18.195 { 00:28:18.195 "name": "BaseBdev3", 00:28:18.195 "aliases": [ 00:28:18.195 "36eb11d7-a95a-418d-86b1-53c7061adce4" 00:28:18.195 ], 00:28:18.195 "product_name": "Malloc disk", 00:28:18.195 "block_size": 512, 
00:28:18.195 "num_blocks": 65536, 00:28:18.195 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:18.195 "assigned_rate_limits": { 00:28:18.195 "rw_ios_per_sec": 0, 00:28:18.195 "rw_mbytes_per_sec": 0, 00:28:18.195 "r_mbytes_per_sec": 0, 00:28:18.195 "w_mbytes_per_sec": 0 00:28:18.195 }, 00:28:18.195 "claimed": false, 00:28:18.195 "zoned": false, 00:28:18.195 "supported_io_types": { 00:28:18.195 "read": true, 00:28:18.195 "write": true, 00:28:18.195 "unmap": true, 00:28:18.195 "flush": true, 00:28:18.195 "reset": true, 00:28:18.195 "nvme_admin": false, 00:28:18.195 "nvme_io": false, 00:28:18.195 "nvme_io_md": false, 00:28:18.195 "write_zeroes": true, 00:28:18.195 "zcopy": true, 00:28:18.195 "get_zone_info": false, 00:28:18.195 "zone_management": false, 00:28:18.195 "zone_append": false, 00:28:18.195 "compare": false, 00:28:18.195 "compare_and_write": false, 00:28:18.195 "abort": true, 00:28:18.195 "seek_hole": false, 00:28:18.195 "seek_data": false, 00:28:18.195 "copy": true, 00:28:18.195 "nvme_iov_md": false 00:28:18.195 }, 00:28:18.195 "memory_domains": [ 00:28:18.195 { 00:28:18.195 "dma_device_id": "system", 00:28:18.195 "dma_device_type": 1 00:28:18.195 }, 00:28:18.195 { 00:28:18.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.195 "dma_device_type": 2 00:28:18.195 } 00:28:18.195 ], 00:28:18.195 "driver_specific": {} 00:28:18.195 } 00:28:18.195 ] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:28:18.195 13:39:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.195 BaseBdev4 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.195 [ 00:28:18.195 { 00:28:18.195 "name": "BaseBdev4", 00:28:18.195 "aliases": [ 00:28:18.195 "1a2530de-19a2-4c63-8fce-073b349a9f59" 00:28:18.195 ], 
00:28:18.195 "product_name": "Malloc disk", 00:28:18.195 "block_size": 512, 00:28:18.195 "num_blocks": 65536, 00:28:18.195 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:18.195 "assigned_rate_limits": { 00:28:18.195 "rw_ios_per_sec": 0, 00:28:18.195 "rw_mbytes_per_sec": 0, 00:28:18.195 "r_mbytes_per_sec": 0, 00:28:18.195 "w_mbytes_per_sec": 0 00:28:18.195 }, 00:28:18.195 "claimed": false, 00:28:18.195 "zoned": false, 00:28:18.195 "supported_io_types": { 00:28:18.195 "read": true, 00:28:18.195 "write": true, 00:28:18.195 "unmap": true, 00:28:18.195 "flush": true, 00:28:18.195 "reset": true, 00:28:18.195 "nvme_admin": false, 00:28:18.195 "nvme_io": false, 00:28:18.195 "nvme_io_md": false, 00:28:18.195 "write_zeroes": true, 00:28:18.195 "zcopy": true, 00:28:18.195 "get_zone_info": false, 00:28:18.195 "zone_management": false, 00:28:18.195 "zone_append": false, 00:28:18.195 "compare": false, 00:28:18.195 "compare_and_write": false, 00:28:18.195 "abort": true, 00:28:18.195 "seek_hole": false, 00:28:18.195 "seek_data": false, 00:28:18.195 "copy": true, 00:28:18.195 "nvme_iov_md": false 00:28:18.195 }, 00:28:18.195 "memory_domains": [ 00:28:18.195 { 00:28:18.195 "dma_device_id": "system", 00:28:18.195 "dma_device_type": 1 00:28:18.195 }, 00:28:18.195 { 00:28:18.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:18.195 "dma_device_type": 2 00:28:18.195 } 00:28:18.195 ], 00:28:18.195 "driver_specific": {} 00:28:18.195 } 00:28:18.195 ] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.195 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.454 [2024-10-28 13:39:32.354569] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:18.454 [2024-10-28 13:39:32.354668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:18.454 [2024-10-28 13:39:32.354716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:18.454 [2024-10-28 13:39:32.357750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:18.454 [2024-10-28 13:39:32.357844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.454 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.455 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.455 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.455 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.455 "name": "Existed_Raid", 00:28:18.455 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:18.455 "strip_size_kb": 0, 00:28:18.455 "state": "configuring", 00:28:18.455 "raid_level": "raid1", 00:28:18.455 "superblock": true, 00:28:18.455 "num_base_bdevs": 4, 00:28:18.455 "num_base_bdevs_discovered": 3, 00:28:18.455 "num_base_bdevs_operational": 4, 00:28:18.455 "base_bdevs_list": [ 00:28:18.455 { 00:28:18.455 "name": "BaseBdev1", 00:28:18.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.455 "is_configured": false, 00:28:18.455 "data_offset": 0, 00:28:18.455 "data_size": 0 00:28:18.455 }, 00:28:18.455 { 00:28:18.455 "name": "BaseBdev2", 00:28:18.455 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:18.455 "is_configured": true, 00:28:18.455 "data_offset": 2048, 00:28:18.455 "data_size": 63488 00:28:18.455 }, 00:28:18.455 { 00:28:18.455 "name": "BaseBdev3", 00:28:18.455 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:18.455 "is_configured": true, 00:28:18.455 "data_offset": 2048, 
00:28:18.455 "data_size": 63488 00:28:18.455 }, 00:28:18.455 { 00:28:18.455 "name": "BaseBdev4", 00:28:18.455 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:18.455 "is_configured": true, 00:28:18.455 "data_offset": 2048, 00:28:18.455 "data_size": 63488 00:28:18.455 } 00:28:18.455 ] 00:28:18.455 }' 00:28:18.455 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.455 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.021 [2024-10-28 13:39:32.902589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.021 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.021 "name": "Existed_Raid", 00:28:19.021 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:19.021 "strip_size_kb": 0, 00:28:19.021 "state": "configuring", 00:28:19.021 "raid_level": "raid1", 00:28:19.021 "superblock": true, 00:28:19.021 "num_base_bdevs": 4, 00:28:19.021 "num_base_bdevs_discovered": 2, 00:28:19.021 "num_base_bdevs_operational": 4, 00:28:19.021 "base_bdevs_list": [ 00:28:19.021 { 00:28:19.021 "name": "BaseBdev1", 00:28:19.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.021 "is_configured": false, 00:28:19.021 "data_offset": 0, 00:28:19.021 "data_size": 0 00:28:19.021 }, 00:28:19.021 { 00:28:19.021 "name": null, 00:28:19.021 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:19.021 "is_configured": false, 00:28:19.021 "data_offset": 0, 00:28:19.021 "data_size": 63488 00:28:19.021 }, 00:28:19.021 { 00:28:19.021 "name": "BaseBdev3", 00:28:19.021 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:19.021 "is_configured": true, 00:28:19.021 "data_offset": 2048, 00:28:19.021 
"data_size": 63488 00:28:19.021 }, 00:28:19.022 { 00:28:19.022 "name": "BaseBdev4", 00:28:19.022 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:19.022 "is_configured": true, 00:28:19.022 "data_offset": 2048, 00:28:19.022 "data_size": 63488 00:28:19.022 } 00:28:19.022 ] 00:28:19.022 }' 00:28:19.022 13:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.022 13:39:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 [2024-10-28 13:39:33.508222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.589 BaseBdev1 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 [ 00:28:19.589 { 00:28:19.589 "name": "BaseBdev1", 00:28:19.589 "aliases": [ 00:28:19.589 "46d43568-e01e-46ad-bb00-bcb1dadeacbc" 00:28:19.589 ], 00:28:19.589 "product_name": "Malloc disk", 00:28:19.589 "block_size": 512, 00:28:19.589 "num_blocks": 65536, 00:28:19.589 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:19.589 "assigned_rate_limits": { 00:28:19.589 "rw_ios_per_sec": 0, 00:28:19.589 "rw_mbytes_per_sec": 0, 00:28:19.589 "r_mbytes_per_sec": 0, 00:28:19.589 "w_mbytes_per_sec": 0 00:28:19.589 }, 00:28:19.589 "claimed": true, 00:28:19.589 "claim_type": "exclusive_write", 00:28:19.589 "zoned": false, 00:28:19.589 "supported_io_types": { 
00:28:19.589 "read": true, 00:28:19.589 "write": true, 00:28:19.589 "unmap": true, 00:28:19.589 "flush": true, 00:28:19.589 "reset": true, 00:28:19.589 "nvme_admin": false, 00:28:19.589 "nvme_io": false, 00:28:19.589 "nvme_io_md": false, 00:28:19.589 "write_zeroes": true, 00:28:19.589 "zcopy": true, 00:28:19.589 "get_zone_info": false, 00:28:19.589 "zone_management": false, 00:28:19.589 "zone_append": false, 00:28:19.589 "compare": false, 00:28:19.589 "compare_and_write": false, 00:28:19.589 "abort": true, 00:28:19.589 "seek_hole": false, 00:28:19.589 "seek_data": false, 00:28:19.589 "copy": true, 00:28:19.589 "nvme_iov_md": false 00:28:19.589 }, 00:28:19.589 "memory_domains": [ 00:28:19.589 { 00:28:19.589 "dma_device_id": "system", 00:28:19.589 "dma_device_type": 1 00:28:19.589 }, 00:28:19.589 { 00:28:19.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:19.589 "dma_device_type": 2 00:28:19.589 } 00:28:19.589 ], 00:28:19.589 "driver_specific": {} 00:28:19.589 } 00:28:19.589 ] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:19.589 13:39:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.589 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.589 "name": "Existed_Raid", 00:28:19.589 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:19.589 "strip_size_kb": 0, 00:28:19.589 "state": "configuring", 00:28:19.589 "raid_level": "raid1", 00:28:19.589 "superblock": true, 00:28:19.589 "num_base_bdevs": 4, 00:28:19.589 "num_base_bdevs_discovered": 3, 00:28:19.589 "num_base_bdevs_operational": 4, 00:28:19.589 "base_bdevs_list": [ 00:28:19.589 { 00:28:19.589 "name": "BaseBdev1", 00:28:19.589 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:19.589 "is_configured": true, 00:28:19.589 "data_offset": 2048, 00:28:19.589 "data_size": 63488 00:28:19.590 }, 00:28:19.590 { 00:28:19.590 "name": null, 00:28:19.590 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:19.590 "is_configured": false, 00:28:19.590 "data_offset": 0, 00:28:19.590 "data_size": 63488 00:28:19.590 }, 00:28:19.590 { 00:28:19.590 "name": 
"BaseBdev3", 00:28:19.590 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:19.590 "is_configured": true, 00:28:19.590 "data_offset": 2048, 00:28:19.590 "data_size": 63488 00:28:19.590 }, 00:28:19.590 { 00:28:19.590 "name": "BaseBdev4", 00:28:19.590 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:19.590 "is_configured": true, 00:28:19.590 "data_offset": 2048, 00:28:19.590 "data_size": 63488 00:28:19.590 } 00:28:19.590 ] 00:28:19.590 }' 00:28:19.590 13:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.590 13:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.156 [2024-10-28 13:39:34.144529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.156 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.157 "name": "Existed_Raid", 00:28:20.157 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:20.157 "strip_size_kb": 0, 00:28:20.157 "state": "configuring", 00:28:20.157 
"raid_level": "raid1", 00:28:20.157 "superblock": true, 00:28:20.157 "num_base_bdevs": 4, 00:28:20.157 "num_base_bdevs_discovered": 2, 00:28:20.157 "num_base_bdevs_operational": 4, 00:28:20.157 "base_bdevs_list": [ 00:28:20.157 { 00:28:20.157 "name": "BaseBdev1", 00:28:20.157 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:20.157 "is_configured": true, 00:28:20.157 "data_offset": 2048, 00:28:20.157 "data_size": 63488 00:28:20.157 }, 00:28:20.157 { 00:28:20.157 "name": null, 00:28:20.157 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:20.157 "is_configured": false, 00:28:20.157 "data_offset": 0, 00:28:20.157 "data_size": 63488 00:28:20.157 }, 00:28:20.157 { 00:28:20.157 "name": null, 00:28:20.157 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:20.157 "is_configured": false, 00:28:20.157 "data_offset": 0, 00:28:20.157 "data_size": 63488 00:28:20.157 }, 00:28:20.157 { 00:28:20.157 "name": "BaseBdev4", 00:28:20.157 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:20.157 "is_configured": true, 00:28:20.157 "data_offset": 2048, 00:28:20.157 "data_size": 63488 00:28:20.157 } 00:28:20.157 ] 00:28:20.157 }' 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.157 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.724 [2024-10-28 13:39:34.672946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.724 13:39:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.724 "name": "Existed_Raid", 00:28:20.724 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:20.724 "strip_size_kb": 0, 00:28:20.724 "state": "configuring", 00:28:20.724 "raid_level": "raid1", 00:28:20.724 "superblock": true, 00:28:20.724 "num_base_bdevs": 4, 00:28:20.724 "num_base_bdevs_discovered": 3, 00:28:20.724 "num_base_bdevs_operational": 4, 00:28:20.724 "base_bdevs_list": [ 00:28:20.724 { 00:28:20.724 "name": "BaseBdev1", 00:28:20.724 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:20.724 "is_configured": true, 00:28:20.724 "data_offset": 2048, 00:28:20.724 "data_size": 63488 00:28:20.724 }, 00:28:20.724 { 00:28:20.724 "name": null, 00:28:20.724 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:20.724 "is_configured": false, 00:28:20.724 "data_offset": 0, 00:28:20.724 "data_size": 63488 00:28:20.724 }, 00:28:20.724 { 00:28:20.724 "name": "BaseBdev3", 00:28:20.724 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:20.724 "is_configured": true, 00:28:20.724 "data_offset": 2048, 00:28:20.724 "data_size": 63488 00:28:20.724 }, 00:28:20.724 { 00:28:20.724 "name": "BaseBdev4", 00:28:20.724 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:20.724 "is_configured": true, 00:28:20.724 "data_offset": 2048, 00:28:20.724 "data_size": 63488 00:28:20.724 } 00:28:20.724 ] 00:28:20.724 }' 00:28:20.724 13:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.724 
13:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.290 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.291 [2024-10-28 13:39:35.245105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.291 13:39:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.291 "name": "Existed_Raid", 00:28:21.291 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:21.291 "strip_size_kb": 0, 00:28:21.291 "state": "configuring", 00:28:21.291 "raid_level": "raid1", 00:28:21.291 "superblock": true, 00:28:21.291 "num_base_bdevs": 4, 00:28:21.291 "num_base_bdevs_discovered": 2, 00:28:21.291 "num_base_bdevs_operational": 4, 00:28:21.291 "base_bdevs_list": [ 00:28:21.291 { 00:28:21.291 "name": null, 00:28:21.291 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:21.291 "is_configured": false, 00:28:21.291 "data_offset": 0, 00:28:21.291 "data_size": 63488 00:28:21.291 }, 00:28:21.291 { 00:28:21.291 "name": null, 00:28:21.291 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:21.291 "is_configured": false, 
00:28:21.291 "data_offset": 0, 00:28:21.291 "data_size": 63488 00:28:21.291 }, 00:28:21.291 { 00:28:21.291 "name": "BaseBdev3", 00:28:21.291 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:21.291 "is_configured": true, 00:28:21.291 "data_offset": 2048, 00:28:21.291 "data_size": 63488 00:28:21.291 }, 00:28:21.291 { 00:28:21.291 "name": "BaseBdev4", 00:28:21.291 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:21.291 "is_configured": true, 00:28:21.291 "data_offset": 2048, 00:28:21.291 "data_size": 63488 00:28:21.291 } 00:28:21.291 ] 00:28:21.291 }' 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.291 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 [2024-10-28 13:39:35.879214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:21.857 13:39:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.857 "name": 
"Existed_Raid", 00:28:21.857 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:21.857 "strip_size_kb": 0, 00:28:21.857 "state": "configuring", 00:28:21.857 "raid_level": "raid1", 00:28:21.857 "superblock": true, 00:28:21.857 "num_base_bdevs": 4, 00:28:21.857 "num_base_bdevs_discovered": 3, 00:28:21.857 "num_base_bdevs_operational": 4, 00:28:21.857 "base_bdevs_list": [ 00:28:21.857 { 00:28:21.857 "name": null, 00:28:21.857 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:21.857 "is_configured": false, 00:28:21.857 "data_offset": 0, 00:28:21.857 "data_size": 63488 00:28:21.857 }, 00:28:21.857 { 00:28:21.857 "name": "BaseBdev2", 00:28:21.857 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:21.857 "is_configured": true, 00:28:21.857 "data_offset": 2048, 00:28:21.857 "data_size": 63488 00:28:21.857 }, 00:28:21.857 { 00:28:21.857 "name": "BaseBdev3", 00:28:21.857 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:21.857 "is_configured": true, 00:28:21.857 "data_offset": 2048, 00:28:21.857 "data_size": 63488 00:28:21.857 }, 00:28:21.857 { 00:28:21.857 "name": "BaseBdev4", 00:28:21.857 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:21.857 "is_configured": true, 00:28:21.857 "data_offset": 2048, 00:28:21.857 "data_size": 63488 00:28:21.857 } 00:28:21.857 ] 00:28:21.857 }' 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.857 13:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46d43568-e01e-46ad-bb00-bcb1dadeacbc 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 NewBaseBdev 00:28:22.425 [2024-10-28 13:39:36.544627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:22.425 [2024-10-28 13:39:36.544892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:22.425 [2024-10-28 13:39:36.544911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:22.425 [2024-10-28 13:39:36.545263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:28:22.425 [2024-10-28 13:39:36.545443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:22.425 [2024-10-28 13:39:36.545464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:28:22.425 [2024-10-28 13:39:36.545590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.425 [ 00:28:22.425 { 00:28:22.425 "name": "NewBaseBdev", 00:28:22.425 "aliases": [ 00:28:22.425 "46d43568-e01e-46ad-bb00-bcb1dadeacbc" 00:28:22.425 ], 00:28:22.425 "product_name": "Malloc disk", 00:28:22.425 "block_size": 512, 
00:28:22.425 "num_blocks": 65536, 00:28:22.425 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:22.425 "assigned_rate_limits": { 00:28:22.425 "rw_ios_per_sec": 0, 00:28:22.425 "rw_mbytes_per_sec": 0, 00:28:22.425 "r_mbytes_per_sec": 0, 00:28:22.425 "w_mbytes_per_sec": 0 00:28:22.425 }, 00:28:22.425 "claimed": true, 00:28:22.425 "claim_type": "exclusive_write", 00:28:22.425 "zoned": false, 00:28:22.425 "supported_io_types": { 00:28:22.425 "read": true, 00:28:22.425 "write": true, 00:28:22.425 "unmap": true, 00:28:22.425 "flush": true, 00:28:22.425 "reset": true, 00:28:22.425 "nvme_admin": false, 00:28:22.425 "nvme_io": false, 00:28:22.425 "nvme_io_md": false, 00:28:22.425 "write_zeroes": true, 00:28:22.425 "zcopy": true, 00:28:22.425 "get_zone_info": false, 00:28:22.425 "zone_management": false, 00:28:22.425 "zone_append": false, 00:28:22.425 "compare": false, 00:28:22.425 "compare_and_write": false, 00:28:22.425 "abort": true, 00:28:22.425 "seek_hole": false, 00:28:22.425 "seek_data": false, 00:28:22.425 "copy": true, 00:28:22.425 "nvme_iov_md": false 00:28:22.425 }, 00:28:22.425 "memory_domains": [ 00:28:22.425 { 00:28:22.425 "dma_device_id": "system", 00:28:22.425 "dma_device_type": 1 00:28:22.425 }, 00:28:22.425 { 00:28:22.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:22.425 "dma_device_type": 2 00:28:22.425 } 00:28:22.425 ], 00:28:22.425 "driver_specific": {} 00:28:22.425 } 00:28:22.425 ] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:22.425 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:22.426 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:22.426 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.426 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.426 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:22.684 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.684 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:22.684 "name": "Existed_Raid", 00:28:22.684 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:22.684 "strip_size_kb": 0, 00:28:22.684 "state": "online", 00:28:22.684 "raid_level": "raid1", 00:28:22.684 "superblock": true, 00:28:22.684 "num_base_bdevs": 4, 00:28:22.684 "num_base_bdevs_discovered": 4, 00:28:22.684 "num_base_bdevs_operational": 4, 00:28:22.684 "base_bdevs_list": [ 00:28:22.684 { 00:28:22.684 "name": "NewBaseBdev", 00:28:22.684 "uuid": 
"46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:22.684 "is_configured": true, 00:28:22.684 "data_offset": 2048, 00:28:22.684 "data_size": 63488 00:28:22.684 }, 00:28:22.684 { 00:28:22.684 "name": "BaseBdev2", 00:28:22.684 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:22.684 "is_configured": true, 00:28:22.684 "data_offset": 2048, 00:28:22.684 "data_size": 63488 00:28:22.684 }, 00:28:22.684 { 00:28:22.684 "name": "BaseBdev3", 00:28:22.684 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:22.684 "is_configured": true, 00:28:22.684 "data_offset": 2048, 00:28:22.684 "data_size": 63488 00:28:22.684 }, 00:28:22.684 { 00:28:22.684 "name": "BaseBdev4", 00:28:22.684 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 00:28:22.684 "is_configured": true, 00:28:22.684 "data_offset": 2048, 00:28:22.684 "data_size": 63488 00:28:22.684 } 00:28:22.684 ] 00:28:22.684 }' 00:28:22.684 13:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:22.684 13:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 [2024-10-28 13:39:37.113357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.265 "name": "Existed_Raid", 00:28:23.265 "aliases": [ 00:28:23.265 "a345308b-6925-4b98-a4b9-7a22a029af80" 00:28:23.265 ], 00:28:23.265 "product_name": "Raid Volume", 00:28:23.265 "block_size": 512, 00:28:23.265 "num_blocks": 63488, 00:28:23.265 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:23.265 "assigned_rate_limits": { 00:28:23.265 "rw_ios_per_sec": 0, 00:28:23.265 "rw_mbytes_per_sec": 0, 00:28:23.265 "r_mbytes_per_sec": 0, 00:28:23.265 "w_mbytes_per_sec": 0 00:28:23.265 }, 00:28:23.265 "claimed": false, 00:28:23.265 "zoned": false, 00:28:23.265 "supported_io_types": { 00:28:23.265 "read": true, 00:28:23.265 "write": true, 00:28:23.265 "unmap": false, 00:28:23.265 "flush": false, 00:28:23.265 "reset": true, 00:28:23.265 "nvme_admin": false, 00:28:23.265 "nvme_io": false, 00:28:23.265 "nvme_io_md": false, 00:28:23.265 "write_zeroes": true, 00:28:23.265 "zcopy": false, 00:28:23.265 "get_zone_info": false, 00:28:23.265 "zone_management": false, 00:28:23.265 "zone_append": false, 00:28:23.265 "compare": false, 00:28:23.265 "compare_and_write": false, 00:28:23.265 "abort": false, 00:28:23.265 "seek_hole": false, 00:28:23.265 "seek_data": false, 00:28:23.265 "copy": false, 00:28:23.265 "nvme_iov_md": false 00:28:23.265 }, 00:28:23.265 "memory_domains": [ 00:28:23.265 { 00:28:23.265 "dma_device_id": "system", 00:28:23.265 "dma_device_type": 1 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.265 "dma_device_type": 2 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "system", 00:28:23.265 "dma_device_type": 1 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.265 "dma_device_type": 2 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "system", 00:28:23.265 "dma_device_type": 1 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.265 "dma_device_type": 2 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "system", 00:28:23.265 "dma_device_type": 1 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:23.265 "dma_device_type": 2 00:28:23.265 } 00:28:23.265 ], 00:28:23.265 "driver_specific": { 00:28:23.265 "raid": { 00:28:23.265 "uuid": "a345308b-6925-4b98-a4b9-7a22a029af80", 00:28:23.265 "strip_size_kb": 0, 00:28:23.265 "state": "online", 00:28:23.265 "raid_level": "raid1", 00:28:23.265 "superblock": true, 00:28:23.265 "num_base_bdevs": 4, 00:28:23.265 "num_base_bdevs_discovered": 4, 00:28:23.265 "num_base_bdevs_operational": 4, 00:28:23.265 "base_bdevs_list": [ 00:28:23.265 { 00:28:23.265 "name": "NewBaseBdev", 00:28:23.265 "uuid": "46d43568-e01e-46ad-bb00-bcb1dadeacbc", 00:28:23.265 "is_configured": true, 00:28:23.265 "data_offset": 2048, 00:28:23.265 "data_size": 63488 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "name": "BaseBdev2", 00:28:23.265 "uuid": "3ad31e90-d824-4274-af05-689f0ac1ddc5", 00:28:23.265 "is_configured": true, 00:28:23.265 "data_offset": 2048, 00:28:23.265 "data_size": 63488 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "name": "BaseBdev3", 00:28:23.265 "uuid": "36eb11d7-a95a-418d-86b1-53c7061adce4", 00:28:23.265 "is_configured": true, 00:28:23.265 "data_offset": 2048, 00:28:23.265 "data_size": 63488 00:28:23.265 }, 00:28:23.265 { 00:28:23.265 "name": "BaseBdev4", 00:28:23.265 "uuid": "1a2530de-19a2-4c63-8fce-073b349a9f59", 
00:28:23.265 "is_configured": true, 00:28:23.265 "data_offset": 2048, 00:28:23.265 "data_size": 63488 00:28:23.265 } 00:28:23.265 ] 00:28:23.265 } 00:28:23.265 } 00:28:23.265 }' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:23.265 BaseBdev2 00:28:23.265 BaseBdev3 00:28:23.265 BaseBdev4' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:28:23.265 13:39:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:23.265 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.524 [2024-10-28 13:39:37.452985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:23.524 [2024-10-28 13:39:37.453029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:23.524 [2024-10-28 13:39:37.453164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:23.524 [2024-10-28 13:39:37.453528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:23.524 [2024-10-28 13:39:37.453555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86578 00:28:23.524 13:39:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86578 ']' 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 86578 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86578 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.524 killing process with pid 86578 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86578' 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 86578 00:28:23.524 [2024-10-28 13:39:37.494520] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:23.524 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 86578 00:28:23.524 [2024-10-28 13:39:37.546422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:23.782 13:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:23.782 00:28:23.782 real 0m11.656s 00:28:23.782 user 0m20.313s 00:28:23.782 sys 0m1.981s 00:28:23.782 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.782 13:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.782 ************************************ 00:28:23.782 END TEST raid_state_function_test_sb 00:28:23.782 ************************************ 00:28:23.782 13:39:37 
bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:28:23.782 13:39:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:23.782 13:39:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.782 13:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:23.782 ************************************ 00:28:23.782 START TEST raid_superblock_test 00:28:23.782 ************************************ 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:23.782 
13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=87248 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 87248 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 87248 ']' 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.782 13:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.041 [2024-10-28 13:39:37.955679] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:24.041 [2024-10-28 13:39:37.955864] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87248 ] 00:28:24.041 [2024-10-28 13:39:38.100376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:28:24.041 [2024-10-28 13:39:38.130695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.041 [2024-10-28 13:39:38.184894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.299 [2024-10-28 13:39:38.241337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.299 [2024-10-28 13:39:38.241381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.866 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.866 malloc1 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 [2024-10-28 13:39:38.897573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.867 [2024-10-28 13:39:38.897649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.867 [2024-10-28 13:39:38.897684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:24.867 [2024-10-28 13:39:38.897704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.867 [2024-10-28 13:39:38.900662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.867 [2024-10-28 13:39:38.900709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.867 pt1 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 malloc2 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 [2024-10-28 13:39:38.929414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:24.867 [2024-10-28 13:39:38.929508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.867 [2024-10-28 13:39:38.929543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:24.867 [2024-10-28 13:39:38.929558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.867 [2024-10-28 13:39:38.932614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.867 [2024-10-28 13:39:38.932669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:24.867 pt2 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 malloc3 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 [2024-10-28 13:39:38.961510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:24.867 [2024-10-28 13:39:38.961578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.867 [2024-10-28 13:39:38.961611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:24.867 [2024-10-28 13:39:38.961625] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.867 [2024-10-28 13:39:38.964570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.867 [2024-10-28 13:39:38.964615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:24.867 pt3 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 malloc4 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:24.867 13:39:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 [2024-10-28 13:39:39.005232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:24.867 [2024-10-28 13:39:39.005307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.867 [2024-10-28 13:39:39.005343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:24.867 [2024-10-28 13:39:39.005359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.867 [2024-10-28 13:39:39.008330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.867 [2024-10-28 13:39:39.008377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:24.867 pt4 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.867 [2024-10-28 13:39:39.017350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.867 [2024-10-28 13:39:39.020031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:24.867 [2024-10-28 13:39:39.020169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:24.867 [2024-10-28 13:39:39.020245] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:24.867 [2024-10-28 13:39:39.020495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:28:24.867 [2024-10-28 13:39:39.020523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:24.867 [2024-10-28 13:39:39.020896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:24.867 [2024-10-28 13:39:39.021167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:28:24.867 [2024-10-28 13:39:39.021202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:28:24.867 [2024-10-28 13:39:39.021535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:24.867 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:25.125 13:39:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.125 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.125 "name": "raid_bdev1", 00:28:25.125 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:25.125 "strip_size_kb": 0, 00:28:25.125 "state": "online", 00:28:25.125 "raid_level": "raid1", 00:28:25.125 "superblock": true, 00:28:25.125 "num_base_bdevs": 4, 00:28:25.125 "num_base_bdevs_discovered": 4, 00:28:25.125 "num_base_bdevs_operational": 4, 00:28:25.125 "base_bdevs_list": [ 00:28:25.125 { 00:28:25.125 "name": "pt1", 00:28:25.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:25.126 "is_configured": true, 00:28:25.126 "data_offset": 2048, 00:28:25.126 "data_size": 63488 00:28:25.126 }, 00:28:25.126 { 00:28:25.126 "name": "pt2", 00:28:25.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:25.126 "is_configured": true, 00:28:25.126 "data_offset": 2048, 00:28:25.126 "data_size": 63488 00:28:25.126 }, 00:28:25.126 { 00:28:25.126 "name": "pt3", 00:28:25.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:25.126 "is_configured": true, 00:28:25.126 "data_offset": 2048, 00:28:25.126 "data_size": 63488 00:28:25.126 }, 00:28:25.126 { 00:28:25.126 "name": "pt4", 00:28:25.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:25.126 "is_configured": true, 00:28:25.126 "data_offset": 2048, 00:28:25.126 "data_size": 63488 00:28:25.126 } 
00:28:25.126 ] 00:28:25.126 }' 00:28:25.126 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.126 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.383 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.383 [2024-10-28 13:39:39.521996] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:25.646 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.647 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.647 "name": "raid_bdev1", 00:28:25.647 "aliases": [ 00:28:25.647 "e15157e9-f95b-421e-b675-9b618c249fd6" 00:28:25.647 ], 00:28:25.647 "product_name": "Raid Volume", 00:28:25.647 "block_size": 512, 00:28:25.647 "num_blocks": 63488, 00:28:25.647 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:25.647 "assigned_rate_limits": { 00:28:25.647 "rw_ios_per_sec": 0, 
00:28:25.647 "rw_mbytes_per_sec": 0, 00:28:25.647 "r_mbytes_per_sec": 0, 00:28:25.647 "w_mbytes_per_sec": 0 00:28:25.647 }, 00:28:25.647 "claimed": false, 00:28:25.647 "zoned": false, 00:28:25.647 "supported_io_types": { 00:28:25.647 "read": true, 00:28:25.647 "write": true, 00:28:25.647 "unmap": false, 00:28:25.647 "flush": false, 00:28:25.647 "reset": true, 00:28:25.647 "nvme_admin": false, 00:28:25.647 "nvme_io": false, 00:28:25.647 "nvme_io_md": false, 00:28:25.647 "write_zeroes": true, 00:28:25.647 "zcopy": false, 00:28:25.647 "get_zone_info": false, 00:28:25.647 "zone_management": false, 00:28:25.647 "zone_append": false, 00:28:25.647 "compare": false, 00:28:25.647 "compare_and_write": false, 00:28:25.647 "abort": false, 00:28:25.647 "seek_hole": false, 00:28:25.647 "seek_data": false, 00:28:25.647 "copy": false, 00:28:25.647 "nvme_iov_md": false 00:28:25.647 }, 00:28:25.647 "memory_domains": [ 00:28:25.647 { 00:28:25.647 "dma_device_id": "system", 00:28:25.647 "dma_device_type": 1 00:28:25.647 }, 00:28:25.647 { 00:28:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.647 "dma_device_type": 2 00:28:25.647 }, 00:28:25.647 { 00:28:25.647 "dma_device_id": "system", 00:28:25.647 "dma_device_type": 1 00:28:25.647 }, 00:28:25.647 { 00:28:25.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.647 "dma_device_type": 2 00:28:25.647 }, 00:28:25.648 { 00:28:25.648 "dma_device_id": "system", 00:28:25.648 "dma_device_type": 1 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.648 "dma_device_type": 2 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "dma_device_id": "system", 00:28:25.648 "dma_device_type": 1 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:25.648 "dma_device_type": 2 00:28:25.648 } 00:28:25.648 ], 00:28:25.648 "driver_specific": { 00:28:25.648 "raid": { 00:28:25.648 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:25.648 "strip_size_kb": 0, 00:28:25.648 
"state": "online", 00:28:25.648 "raid_level": "raid1", 00:28:25.648 "superblock": true, 00:28:25.648 "num_base_bdevs": 4, 00:28:25.648 "num_base_bdevs_discovered": 4, 00:28:25.648 "num_base_bdevs_operational": 4, 00:28:25.648 "base_bdevs_list": [ 00:28:25.648 { 00:28:25.648 "name": "pt1", 00:28:25.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:25.648 "is_configured": true, 00:28:25.648 "data_offset": 2048, 00:28:25.648 "data_size": 63488 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "name": "pt2", 00:28:25.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:25.648 "is_configured": true, 00:28:25.648 "data_offset": 2048, 00:28:25.648 "data_size": 63488 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "name": "pt3", 00:28:25.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:25.648 "is_configured": true, 00:28:25.648 "data_offset": 2048, 00:28:25.648 "data_size": 63488 00:28:25.648 }, 00:28:25.648 { 00:28:25.648 "name": "pt4", 00:28:25.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:25.648 "is_configured": true, 00:28:25.648 "data_offset": 2048, 00:28:25.649 "data_size": 63488 00:28:25.649 } 00:28:25.649 ] 00:28:25.649 } 00:28:25.649 } 00:28:25.649 }' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:25.649 pt2 00:28:25.649 pt3 00:28:25.649 pt4' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:25.649 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:25.650 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:25.650 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:25.650 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:25.650 13:39:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.650 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.650 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:25.917 [2024-10-28 13:39:39.886049] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:25.917 13:39:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e15157e9-f95b-421e-b675-9b618c249fd6 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e15157e9-f95b-421e-b675-9b618c249fd6 ']' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 [2024-10-28 13:39:39.937651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:25.917 [2024-10-28 13:39:39.937692] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:25.917 [2024-10-28 13:39:39.937829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:25.917 [2024-10-28 13:39:39.937954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:25.917 [2024-10-28 13:39:39.937984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.176 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.176 [2024-10-28 13:39:40.101771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:26.176 [2024-10-28 13:39:40.104362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:26.176 [2024-10-28 13:39:40.104436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:26.176 [2024-10-28 13:39:40.104492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:26.176 [2024-10-28 13:39:40.104565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:26.176 [2024-10-28 13:39:40.104638] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:26.176 [2024-10-28 13:39:40.104676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:26.176 [2024-10-28 13:39:40.104708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:28:26.176 [2024-10-28 13:39:40.104736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:26.176 [2024-10-28 13:39:40.104754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:28:26.176 request: 00:28:26.176 { 00:28:26.176 "name": "raid_bdev1", 00:28:26.176 "raid_level": "raid1", 00:28:26.176 "base_bdevs": [ 00:28:26.176 "malloc1", 00:28:26.176 "malloc2", 00:28:26.176 "malloc3", 00:28:26.176 
"malloc4" 00:28:26.176 ], 00:28:26.176 "superblock": false, 00:28:26.176 "method": "bdev_raid_create", 00:28:26.176 "req_id": 1 00:28:26.176 } 00:28:26.176 Got JSON-RPC error response 00:28:26.176 response: 00:28:26.176 { 00:28:26.176 "code": -17, 00:28:26.176 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:26.176 } 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.177 [2024-10-28 13:39:40.169731] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:26.177 [2024-10-28 13:39:40.169817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.177 [2024-10-28 13:39:40.169846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:26.177 [2024-10-28 13:39:40.169863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.177 [2024-10-28 13:39:40.172876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.177 [2024-10-28 13:39:40.172925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:26.177 [2024-10-28 13:39:40.173045] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:26.177 [2024-10-28 13:39:40.173101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:26.177 pt1 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.177 "name": "raid_bdev1", 00:28:26.177 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:26.177 "strip_size_kb": 0, 00:28:26.177 "state": "configuring", 00:28:26.177 "raid_level": "raid1", 00:28:26.177 "superblock": true, 00:28:26.177 "num_base_bdevs": 4, 00:28:26.177 "num_base_bdevs_discovered": 1, 00:28:26.177 "num_base_bdevs_operational": 4, 00:28:26.177 "base_bdevs_list": [ 00:28:26.177 { 00:28:26.177 "name": "pt1", 00:28:26.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.177 "is_configured": true, 00:28:26.177 "data_offset": 2048, 00:28:26.177 "data_size": 63488 00:28:26.177 }, 00:28:26.177 { 00:28:26.177 "name": null, 00:28:26.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.177 "is_configured": false, 00:28:26.177 "data_offset": 2048, 00:28:26.177 "data_size": 63488 00:28:26.177 }, 00:28:26.177 { 00:28:26.177 "name": null, 00:28:26.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:26.177 "is_configured": false, 00:28:26.177 "data_offset": 2048, 00:28:26.177 "data_size": 63488 00:28:26.177 }, 00:28:26.177 { 00:28:26.177 "name": null, 00:28:26.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:26.177 "is_configured": 
false, 00:28:26.177 "data_offset": 2048, 00:28:26.177 "data_size": 63488 00:28:26.177 } 00:28:26.177 ] 00:28:26.177 }' 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.177 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.742 [2024-10-28 13:39:40.705859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:26.742 [2024-10-28 13:39:40.705982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.742 [2024-10-28 13:39:40.706016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:26.742 [2024-10-28 13:39:40.706034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.742 [2024-10-28 13:39:40.706578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.742 [2024-10-28 13:39:40.706621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:26.742 [2024-10-28 13:39:40.706723] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:26.742 [2024-10-28 13:39:40.706771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:26.742 pt2 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 
00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.742 [2024-10-28 13:39:40.713840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:26.742 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.743 13:39:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.743 "name": "raid_bdev1", 00:28:26.743 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:26.743 "strip_size_kb": 0, 00:28:26.743 "state": "configuring", 00:28:26.743 "raid_level": "raid1", 00:28:26.743 "superblock": true, 00:28:26.743 "num_base_bdevs": 4, 00:28:26.743 "num_base_bdevs_discovered": 1, 00:28:26.743 "num_base_bdevs_operational": 4, 00:28:26.743 "base_bdevs_list": [ 00:28:26.743 { 00:28:26.743 "name": "pt1", 00:28:26.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.743 "is_configured": true, 00:28:26.743 "data_offset": 2048, 00:28:26.743 "data_size": 63488 00:28:26.743 }, 00:28:26.743 { 00:28:26.743 "name": null, 00:28:26.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.743 "is_configured": false, 00:28:26.743 "data_offset": 0, 00:28:26.743 "data_size": 63488 00:28:26.743 }, 00:28:26.743 { 00:28:26.743 "name": null, 00:28:26.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:26.743 "is_configured": false, 00:28:26.743 "data_offset": 2048, 00:28:26.743 "data_size": 63488 00:28:26.743 }, 00:28:26.743 { 00:28:26.743 "name": null, 00:28:26.743 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:26.743 "is_configured": false, 00:28:26.743 "data_offset": 2048, 00:28:26.743 "data_size": 63488 00:28:26.743 } 00:28:26.743 ] 00:28:26.743 }' 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.743 13:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 [2024-10-28 13:39:41.237963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:27.309 [2024-10-28 13:39:41.238091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.309 [2024-10-28 13:39:41.238122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:27.309 [2024-10-28 13:39:41.238136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.309 [2024-10-28 13:39:41.238735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.309 [2024-10-28 13:39:41.238788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:27.309 [2024-10-28 13:39:41.238922] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:27.309 [2024-10-28 13:39:41.238987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:27.309 pt2 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 [2024-10-28 13:39:41.249977] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:27.309 [2024-10-28 13:39:41.250098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.309 [2024-10-28 13:39:41.250131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:27.309 [2024-10-28 13:39:41.250161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.309 [2024-10-28 13:39:41.250720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.309 [2024-10-28 13:39:41.250777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:27.309 [2024-10-28 13:39:41.250898] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:27.309 [2024-10-28 13:39:41.250947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:27.309 pt3 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.309 [2024-10-28 13:39:41.261988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:27.309 [2024-10-28 13:39:41.262089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.309 [2024-10-28 13:39:41.262119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 
00:28:27.309 [2024-10-28 13:39:41.262133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.309 [2024-10-28 13:39:41.262658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.309 [2024-10-28 13:39:41.262700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:27.309 [2024-10-28 13:39:41.262801] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:27.309 [2024-10-28 13:39:41.262834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:27.309 [2024-10-28 13:39:41.262997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:27.309 [2024-10-28 13:39:41.263022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:27.309 [2024-10-28 13:39:41.263350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:27.309 [2024-10-28 13:39:41.263546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:27.309 [2024-10-28 13:39:41.263576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:27.309 [2024-10-28 13:39:41.263713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:27.309 pt4 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.309 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:27.310 "name": "raid_bdev1", 00:28:27.310 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:27.310 "strip_size_kb": 0, 00:28:27.310 "state": "online", 00:28:27.310 "raid_level": "raid1", 00:28:27.310 "superblock": true, 00:28:27.310 "num_base_bdevs": 4, 00:28:27.310 "num_base_bdevs_discovered": 4, 00:28:27.310 "num_base_bdevs_operational": 4, 00:28:27.310 "base_bdevs_list": [ 00:28:27.310 { 00:28:27.310 "name": "pt1", 00:28:27.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:27.310 "is_configured": true, 00:28:27.310 
"data_offset": 2048, 00:28:27.310 "data_size": 63488 00:28:27.310 }, 00:28:27.310 { 00:28:27.310 "name": "pt2", 00:28:27.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.310 "is_configured": true, 00:28:27.310 "data_offset": 2048, 00:28:27.310 "data_size": 63488 00:28:27.310 }, 00:28:27.310 { 00:28:27.310 "name": "pt3", 00:28:27.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.310 "is_configured": true, 00:28:27.310 "data_offset": 2048, 00:28:27.310 "data_size": 63488 00:28:27.310 }, 00:28:27.310 { 00:28:27.310 "name": "pt4", 00:28:27.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.310 "is_configured": true, 00:28:27.310 "data_offset": 2048, 00:28:27.310 "data_size": 63488 00:28:27.310 } 00:28:27.310 ] 00:28:27.310 }' 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:27.310 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:27.877 [2024-10-28 13:39:41.802556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.877 "name": "raid_bdev1", 00:28:27.877 "aliases": [ 00:28:27.877 "e15157e9-f95b-421e-b675-9b618c249fd6" 00:28:27.877 ], 00:28:27.877 "product_name": "Raid Volume", 00:28:27.877 "block_size": 512, 00:28:27.877 "num_blocks": 63488, 00:28:27.877 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:27.877 "assigned_rate_limits": { 00:28:27.877 "rw_ios_per_sec": 0, 00:28:27.877 "rw_mbytes_per_sec": 0, 00:28:27.877 "r_mbytes_per_sec": 0, 00:28:27.877 "w_mbytes_per_sec": 0 00:28:27.877 }, 00:28:27.877 "claimed": false, 00:28:27.877 "zoned": false, 00:28:27.877 "supported_io_types": { 00:28:27.877 "read": true, 00:28:27.877 "write": true, 00:28:27.877 "unmap": false, 00:28:27.877 "flush": false, 00:28:27.877 "reset": true, 00:28:27.877 "nvme_admin": false, 00:28:27.877 "nvme_io": false, 00:28:27.877 "nvme_io_md": false, 00:28:27.877 "write_zeroes": true, 00:28:27.877 "zcopy": false, 00:28:27.877 "get_zone_info": false, 00:28:27.877 "zone_management": false, 00:28:27.877 "zone_append": false, 00:28:27.877 "compare": false, 00:28:27.877 "compare_and_write": false, 00:28:27.877 "abort": false, 00:28:27.877 "seek_hole": false, 00:28:27.877 "seek_data": false, 00:28:27.877 "copy": false, 00:28:27.877 "nvme_iov_md": false 00:28:27.877 }, 00:28:27.877 "memory_domains": [ 00:28:27.877 { 00:28:27.877 "dma_device_id": "system", 00:28:27.877 "dma_device_type": 1 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.877 "dma_device_type": 2 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "system", 00:28:27.877 "dma_device_type": 1 00:28:27.877 }, 00:28:27.877 { 
00:28:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.877 "dma_device_type": 2 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "system", 00:28:27.877 "dma_device_type": 1 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.877 "dma_device_type": 2 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "system", 00:28:27.877 "dma_device_type": 1 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:27.877 "dma_device_type": 2 00:28:27.877 } 00:28:27.877 ], 00:28:27.877 "driver_specific": { 00:28:27.877 "raid": { 00:28:27.877 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:27.877 "strip_size_kb": 0, 00:28:27.877 "state": "online", 00:28:27.877 "raid_level": "raid1", 00:28:27.877 "superblock": true, 00:28:27.877 "num_base_bdevs": 4, 00:28:27.877 "num_base_bdevs_discovered": 4, 00:28:27.877 "num_base_bdevs_operational": 4, 00:28:27.877 "base_bdevs_list": [ 00:28:27.877 { 00:28:27.877 "name": "pt1", 00:28:27.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:27.877 "is_configured": true, 00:28:27.877 "data_offset": 2048, 00:28:27.877 "data_size": 63488 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "name": "pt2", 00:28:27.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:27.877 "is_configured": true, 00:28:27.877 "data_offset": 2048, 00:28:27.877 "data_size": 63488 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "name": "pt3", 00:28:27.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:27.877 "is_configured": true, 00:28:27.877 "data_offset": 2048, 00:28:27.877 "data_size": 63488 00:28:27.877 }, 00:28:27.877 { 00:28:27.877 "name": "pt4", 00:28:27.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:27.877 "is_configured": true, 00:28:27.877 "data_offset": 2048, 00:28:27.877 "data_size": 63488 00:28:27.877 } 00:28:27.877 ] 00:28:27.877 } 00:28:27.877 } 00:28:27.877 }' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:27.877 pt2 00:28:27.877 pt3 00:28:27.877 pt4' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:27.877 13:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:27.877 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:27.877 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.877 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.877 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:28:27.877 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 [2024-10-28 13:39:42.182614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e15157e9-f95b-421e-b675-9b618c249fd6 '!=' e15157e9-f95b-421e-b675-9b618c249fd6 ']' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 [2024-10-28 13:39:42.230351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.138 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.396 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:28.396 "name": "raid_bdev1", 00:28:28.396 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:28.396 "strip_size_kb": 0, 00:28:28.396 "state": "online", 00:28:28.396 "raid_level": "raid1", 00:28:28.396 "superblock": true, 00:28:28.396 "num_base_bdevs": 4, 00:28:28.396 "num_base_bdevs_discovered": 3, 00:28:28.396 "num_base_bdevs_operational": 3, 00:28:28.396 "base_bdevs_list": [ 00:28:28.396 { 00:28:28.396 "name": null, 
00:28:28.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.396 "is_configured": false, 00:28:28.396 "data_offset": 0, 00:28:28.396 "data_size": 63488 00:28:28.396 }, 00:28:28.396 { 00:28:28.396 "name": "pt2", 00:28:28.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:28.396 "is_configured": true, 00:28:28.396 "data_offset": 2048, 00:28:28.396 "data_size": 63488 00:28:28.396 }, 00:28:28.396 { 00:28:28.396 "name": "pt3", 00:28:28.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:28.396 "is_configured": true, 00:28:28.396 "data_offset": 2048, 00:28:28.396 "data_size": 63488 00:28:28.396 }, 00:28:28.396 { 00:28:28.396 "name": "pt4", 00:28:28.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:28.396 "is_configured": true, 00:28:28.396 "data_offset": 2048, 00:28:28.396 "data_size": 63488 00:28:28.396 } 00:28:28.396 ] 00:28:28.396 }' 00:28:28.396 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:28.396 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.654 [2024-10-28 13:39:42.774388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:28.654 [2024-10-28 13:39:42.774432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:28.654 [2024-10-28 13:39:42.774570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:28.654 [2024-10-28 13:39:42.774698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:28.654 [2024-10-28 13:39:42.774720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:28:28.654 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 [2024-10-28 13:39:42.874416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:28.913 [2024-10-28 13:39:42.874488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.913 [2024-10-28 13:39:42.874519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:28.913 [2024-10-28 13:39:42.874533] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.913 [2024-10-28 13:39:42.877468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.913 [2024-10-28 13:39:42.877513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:28.913 [2024-10-28 13:39:42.877616] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:28.913 [2024-10-28 13:39:42.877666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:28.913 pt2 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.913 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:28.913 "name": "raid_bdev1", 00:28:28.913 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:28.913 "strip_size_kb": 0, 00:28:28.913 "state": "configuring", 00:28:28.913 "raid_level": "raid1", 00:28:28.913 "superblock": true, 00:28:28.913 "num_base_bdevs": 4, 00:28:28.913 "num_base_bdevs_discovered": 1, 00:28:28.913 "num_base_bdevs_operational": 3, 00:28:28.913 "base_bdevs_list": [ 00:28:28.913 { 00:28:28.913 "name": null, 00:28:28.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.913 "is_configured": false, 00:28:28.913 "data_offset": 2048, 00:28:28.913 "data_size": 63488 00:28:28.913 }, 00:28:28.913 { 00:28:28.914 "name": "pt2", 00:28:28.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:28.914 "is_configured": true, 00:28:28.914 "data_offset": 2048, 00:28:28.914 "data_size": 63488 00:28:28.914 }, 00:28:28.914 { 00:28:28.914 "name": null, 00:28:28.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:28.914 "is_configured": false, 00:28:28.914 "data_offset": 2048, 00:28:28.914 "data_size": 63488 00:28:28.914 }, 00:28:28.914 { 00:28:28.914 "name": null, 00:28:28.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:28.914 "is_configured": false, 00:28:28.914 "data_offset": 2048, 00:28:28.914 "data_size": 63488 00:28:28.914 } 00:28:28.914 ] 00:28:28.914 }' 00:28:28.914 13:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:28.914 13:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.478 13:39:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:29.478 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:29.478 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:29.478 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.478 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.478 [2024-10-28 13:39:43.406745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:29.478 [2024-10-28 13:39:43.406851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.478 [2024-10-28 13:39:43.406889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:29.478 [2024-10-28 13:39:43.406904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.478 [2024-10-28 13:39:43.407486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.478 [2024-10-28 13:39:43.407532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:29.478 [2024-10-28 13:39:43.407639] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:29.478 [2024-10-28 13:39:43.407673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:29.478 pt3 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:29.479 
13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:29.479 "name": "raid_bdev1", 00:28:29.479 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:29.479 "strip_size_kb": 0, 00:28:29.479 "state": "configuring", 00:28:29.479 "raid_level": "raid1", 00:28:29.479 "superblock": true, 00:28:29.479 "num_base_bdevs": 4, 00:28:29.479 "num_base_bdevs_discovered": 2, 00:28:29.479 "num_base_bdevs_operational": 3, 00:28:29.479 "base_bdevs_list": [ 00:28:29.479 { 00:28:29.479 "name": null, 00:28:29.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.479 "is_configured": false, 00:28:29.479 "data_offset": 2048, 00:28:29.479 "data_size": 63488 00:28:29.479 }, 
00:28:29.479 { 00:28:29.479 "name": "pt2", 00:28:29.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:29.479 "is_configured": true, 00:28:29.479 "data_offset": 2048, 00:28:29.479 "data_size": 63488 00:28:29.479 }, 00:28:29.479 { 00:28:29.479 "name": "pt3", 00:28:29.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:29.479 "is_configured": true, 00:28:29.479 "data_offset": 2048, 00:28:29.479 "data_size": 63488 00:28:29.479 }, 00:28:29.479 { 00:28:29.479 "name": null, 00:28:29.479 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:29.479 "is_configured": false, 00:28:29.479 "data_offset": 2048, 00:28:29.479 "data_size": 63488 00:28:29.479 } 00:28:29.479 ] 00:28:29.479 }' 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:29.479 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.044 [2024-10-28 13:39:43.962883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:30.044 [2024-10-28 13:39:43.962989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.044 [2024-10-28 13:39:43.963023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:30.044 [2024-10-28 13:39:43.963038] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.044 [2024-10-28 13:39:43.963638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.044 [2024-10-28 13:39:43.963674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:30.044 [2024-10-28 13:39:43.963778] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:30.044 [2024-10-28 13:39:43.963819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:30.044 [2024-10-28 13:39:43.963978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:30.044 [2024-10-28 13:39:43.964004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:30.044 [2024-10-28 13:39:43.964328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:28:30.044 [2024-10-28 13:39:43.964507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:30.044 [2024-10-28 13:39:43.964539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:30.044 [2024-10-28 13:39:43.964677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.044 pt4 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.044 13:39:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.044 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.044 "name": "raid_bdev1", 00:28:30.044 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:30.044 "strip_size_kb": 0, 00:28:30.044 "state": "online", 00:28:30.044 "raid_level": "raid1", 00:28:30.044 "superblock": true, 00:28:30.044 "num_base_bdevs": 4, 00:28:30.044 "num_base_bdevs_discovered": 3, 00:28:30.044 "num_base_bdevs_operational": 3, 00:28:30.044 "base_bdevs_list": [ 00:28:30.044 { 00:28:30.044 "name": null, 00:28:30.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.044 "is_configured": false, 00:28:30.044 "data_offset": 2048, 00:28:30.044 "data_size": 63488 00:28:30.044 }, 00:28:30.044 { 00:28:30.044 "name": "pt2", 00:28:30.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:30.044 "is_configured": true, 00:28:30.044 "data_offset": 2048, 00:28:30.044 
"data_size": 63488 00:28:30.044 }, 00:28:30.044 { 00:28:30.044 "name": "pt3", 00:28:30.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:30.044 "is_configured": true, 00:28:30.044 "data_offset": 2048, 00:28:30.044 "data_size": 63488 00:28:30.044 }, 00:28:30.044 { 00:28:30.044 "name": "pt4", 00:28:30.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:30.044 "is_configured": true, 00:28:30.044 "data_offset": 2048, 00:28:30.044 "data_size": 63488 00:28:30.044 } 00:28:30.044 ] 00:28:30.044 }' 00:28:30.044 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.044 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.610 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:30.610 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.610 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.610 [2024-10-28 13:39:44.510996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.611 [2024-10-28 13:39:44.511049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:30.611 [2024-10-28 13:39:44.511164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.611 [2024-10-28 13:39:44.511300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.611 [2024-10-28 13:39:44.511321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.611 [2024-10-28 13:39:44.602984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:30.611 [2024-10-28 13:39:44.603106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.611 [2024-10-28 13:39:44.603133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:30.611 [2024-10-28 13:39:44.603179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:28:30.611 [2024-10-28 13:39:44.606433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.611 [2024-10-28 13:39:44.606538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:30.611 [2024-10-28 13:39:44.606641] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:30.611 [2024-10-28 13:39:44.606692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:30.611 [2024-10-28 13:39:44.606838] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:30.611 [2024-10-28 13:39:44.606861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.611 [2024-10-28 13:39:44.606892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:28:30.611 [2024-10-28 13:39:44.606946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:30.611 [2024-10-28 13:39:44.607110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:30.611 pt1 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:30.611 
13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.611 "name": "raid_bdev1", 00:28:30.611 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:30.611 "strip_size_kb": 0, 00:28:30.611 "state": "configuring", 00:28:30.611 "raid_level": "raid1", 00:28:30.611 "superblock": true, 00:28:30.611 "num_base_bdevs": 4, 00:28:30.611 "num_base_bdevs_discovered": 2, 00:28:30.611 "num_base_bdevs_operational": 3, 00:28:30.611 "base_bdevs_list": [ 00:28:30.611 { 00:28:30.611 "name": null, 00:28:30.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.611 "is_configured": false, 00:28:30.611 "data_offset": 2048, 00:28:30.611 "data_size": 63488 00:28:30.611 }, 00:28:30.611 { 00:28:30.611 "name": "pt2", 00:28:30.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:30.611 "is_configured": true, 00:28:30.611 "data_offset": 2048, 00:28:30.611 "data_size": 63488 
00:28:30.611 }, 00:28:30.611 { 00:28:30.611 "name": "pt3", 00:28:30.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:30.611 "is_configured": true, 00:28:30.611 "data_offset": 2048, 00:28:30.611 "data_size": 63488 00:28:30.611 }, 00:28:30.611 { 00:28:30.611 "name": null, 00:28:30.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:30.611 "is_configured": false, 00:28:30.611 "data_offset": 2048, 00:28:30.611 "data_size": 63488 00:28:30.611 } 00:28:30.611 ] 00:28:30.611 }' 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.611 13:39:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.177 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.177 [2024-10-28 13:39:45.203364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:31.178 [2024-10-28 13:39:45.203439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:31.178 [2024-10-28 13:39:45.203487] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:28:31.178 [2024-10-28 13:39:45.203511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:31.178 [2024-10-28 13:39:45.204073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:31.178 [2024-10-28 13:39:45.204122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:31.178 [2024-10-28 13:39:45.204248] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:28:31.178 [2024-10-28 13:39:45.204298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:31.178 [2024-10-28 13:39:45.204447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:28:31.178 [2024-10-28 13:39:45.204488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:31.178 [2024-10-28 13:39:45.204817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:28:31.178 [2024-10-28 13:39:45.204976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:28:31.178 [2024-10-28 13:39:45.205002] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:28:31.178 [2024-10-28 13:39:45.205192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.178 pt4 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:31.178 13:39:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:31.178 "name": "raid_bdev1", 00:28:31.178 "uuid": "e15157e9-f95b-421e-b675-9b618c249fd6", 00:28:31.178 "strip_size_kb": 0, 00:28:31.178 "state": "online", 00:28:31.178 "raid_level": "raid1", 00:28:31.178 "superblock": true, 00:28:31.178 "num_base_bdevs": 4, 00:28:31.178 "num_base_bdevs_discovered": 3, 00:28:31.178 "num_base_bdevs_operational": 3, 00:28:31.178 "base_bdevs_list": [ 00:28:31.178 { 00:28:31.178 "name": null, 00:28:31.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.178 "is_configured": false, 00:28:31.178 "data_offset": 2048, 00:28:31.178 "data_size": 63488 00:28:31.178 }, 00:28:31.178 { 
00:28:31.178 "name": "pt2", 00:28:31.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:31.178 "is_configured": true, 00:28:31.178 "data_offset": 2048, 00:28:31.178 "data_size": 63488 00:28:31.178 }, 00:28:31.178 { 00:28:31.178 "name": "pt3", 00:28:31.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:31.178 "is_configured": true, 00:28:31.178 "data_offset": 2048, 00:28:31.178 "data_size": 63488 00:28:31.178 }, 00:28:31.178 { 00:28:31.178 "name": "pt4", 00:28:31.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:28:31.178 "is_configured": true, 00:28:31.178 "data_offset": 2048, 00:28:31.178 "data_size": 63488 00:28:31.178 } 00:28:31.178 ] 00:28:31.178 }' 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:31.178 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.786 [2024-10-28 
13:39:45.807954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e15157e9-f95b-421e-b675-9b618c249fd6 '!=' e15157e9-f95b-421e-b675-9b618c249fd6 ']' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 87248 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 87248 ']' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 87248 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87248 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:31.786 killing process with pid 87248 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87248' 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 87248 00:28:31.786 [2024-10-28 13:39:45.890620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:31.786 13:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 87248 00:28:31.786 [2024-10-28 13:39:45.890732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:31.786 [2024-10-28 13:39:45.890832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:28:31.786 [2024-10-28 13:39:45.890855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:28:31.786 [2024-10-28 13:39:45.933861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:32.044 13:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:32.044 00:28:32.044 real 0m8.331s 00:28:32.044 user 0m14.498s 00:28:32.044 sys 0m1.374s 00:28:32.044 13:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.044 13:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.044 ************************************ 00:28:32.044 END TEST raid_superblock_test 00:28:32.044 ************************************ 00:28:32.302 13:39:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:28:32.302 13:39:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:32.302 13:39:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:32.302 13:39:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.302 ************************************ 00:28:32.302 START TEST raid_read_error_test 00:28:32.302 ************************************ 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.302 13:39:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:32.302 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xSA3C1v8JC 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87730 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87730 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 87730 ']' 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:32.303 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.303 [2024-10-28 13:39:46.343969] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:28:32.303 [2024-10-28 13:39:46.344206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87730 ] 00:28:32.561 [2024-10-28 13:39:46.496999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:32.561 [2024-10-28 13:39:46.528885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.561 [2024-10-28 13:39:46.577711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.561 [2024-10-28 13:39:46.639050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:32.561 [2024-10-28 13:39:46.639100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.561 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 BaseBdev1_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 true 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 [2024-10-28 13:39:46.742646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:32.820 [2024-10-28 13:39:46.742738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:32.820 [2024-10-28 13:39:46.742763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:32.820 [2024-10-28 13:39:46.742782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:32.820 [2024-10-28 13:39:46.745836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:32.820 [2024-10-28 13:39:46.745903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:32.820 BaseBdev1 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 BaseBdev2_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 true 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 [2024-10-28 13:39:46.775153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:32.820 [2024-10-28 13:39:46.775250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:32.820 [2024-10-28 13:39:46.775275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:32.820 [2024-10-28 13:39:46.775292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:32.820 [2024-10-28 13:39:46.778189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:32.820 [2024-10-28 13:39:46.778280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:32.820 BaseBdev2 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 BaseBdev3_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 true 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.820 [2024-10-28 13:39:46.814747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:32.820 [2024-10-28 13:39:46.814839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:32.820 [2024-10-28 13:39:46.814866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:32.820 [2024-10-28 13:39:46.814883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:32.820 [2024-10-28 13:39:46.817812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:32.820 [2024-10-28 13:39:46.817888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:32.820 BaseBdev3 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.820 13:39:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:32.820 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.821 BaseBdev4_malloc 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.821 true 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.821 [2024-10-28 13:39:46.859629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:32.821 [2024-10-28 13:39:46.859701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:32.821 [2024-10-28 13:39:46.859728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:32.821 [2024-10-28 13:39:46.859746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:32.821 [2024-10-28 13:39:46.862451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:32.821 
[2024-10-28 13:39:46.862522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:32.821 BaseBdev4 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.821 [2024-10-28 13:39:46.867663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:32.821 [2024-10-28 13:39:46.870118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:32.821 [2024-10-28 13:39:46.870243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:32.821 [2024-10-28 13:39:46.870324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:32.821 [2024-10-28 13:39:46.870628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:32.821 [2024-10-28 13:39:46.870660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:32.821 [2024-10-28 13:39:46.870992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:32.821 [2024-10-28 13:39:46.871256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:32.821 [2024-10-28 13:39:46.871284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:32.821 [2024-10-28 13:39:46.871458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:32.821 "name": "raid_bdev1", 00:28:32.821 "uuid": "ecd152e1-bf73-4610-b19b-90f481f5ebf6", 00:28:32.821 "strip_size_kb": 0, 00:28:32.821 "state": "online", 00:28:32.821 "raid_level": "raid1", 00:28:32.821 "superblock": true, 
00:28:32.821 "num_base_bdevs": 4, 00:28:32.821 "num_base_bdevs_discovered": 4, 00:28:32.821 "num_base_bdevs_operational": 4, 00:28:32.821 "base_bdevs_list": [ 00:28:32.821 { 00:28:32.821 "name": "BaseBdev1", 00:28:32.821 "uuid": "f1423a4f-f145-5482-82ad-83cbaef27fb6", 00:28:32.821 "is_configured": true, 00:28:32.821 "data_offset": 2048, 00:28:32.821 "data_size": 63488 00:28:32.821 }, 00:28:32.821 { 00:28:32.821 "name": "BaseBdev2", 00:28:32.821 "uuid": "bae48cc6-4ad9-55ac-b289-ab72d5360afe", 00:28:32.821 "is_configured": true, 00:28:32.821 "data_offset": 2048, 00:28:32.821 "data_size": 63488 00:28:32.821 }, 00:28:32.821 { 00:28:32.821 "name": "BaseBdev3", 00:28:32.821 "uuid": "4bd85122-0ea4-5f06-a4bc-f1b48247f6bb", 00:28:32.821 "is_configured": true, 00:28:32.821 "data_offset": 2048, 00:28:32.821 "data_size": 63488 00:28:32.821 }, 00:28:32.821 { 00:28:32.821 "name": "BaseBdev4", 00:28:32.821 "uuid": "59127062-d82f-57fe-b7c0-9b52c87c25b6", 00:28:32.821 "is_configured": true, 00:28:32.821 "data_offset": 2048, 00:28:32.821 "data_size": 63488 00:28:32.821 } 00:28:32.821 ] 00:28:32.821 }' 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:32.821 13:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.387 13:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:33.387 13:39:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:33.646 [2024-10-28 13:39:47.552595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set 
+x 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:34.583 "name": "raid_bdev1", 00:28:34.583 "uuid": "ecd152e1-bf73-4610-b19b-90f481f5ebf6", 00:28:34.583 "strip_size_kb": 0, 00:28:34.583 "state": "online", 00:28:34.583 "raid_level": "raid1", 00:28:34.583 "superblock": true, 00:28:34.583 "num_base_bdevs": 4, 00:28:34.583 "num_base_bdevs_discovered": 4, 00:28:34.583 "num_base_bdevs_operational": 4, 00:28:34.583 "base_bdevs_list": [ 00:28:34.583 { 00:28:34.583 "name": "BaseBdev1", 00:28:34.583 "uuid": "f1423a4f-f145-5482-82ad-83cbaef27fb6", 00:28:34.583 "is_configured": true, 00:28:34.583 "data_offset": 2048, 00:28:34.583 "data_size": 63488 00:28:34.583 }, 00:28:34.583 { 00:28:34.583 "name": "BaseBdev2", 00:28:34.583 "uuid": "bae48cc6-4ad9-55ac-b289-ab72d5360afe", 00:28:34.583 "is_configured": true, 00:28:34.583 "data_offset": 2048, 00:28:34.583 "data_size": 63488 00:28:34.583 }, 00:28:34.583 { 00:28:34.583 "name": "BaseBdev3", 00:28:34.583 "uuid": "4bd85122-0ea4-5f06-a4bc-f1b48247f6bb", 00:28:34.583 "is_configured": true, 00:28:34.583 "data_offset": 2048, 00:28:34.583 "data_size": 63488 00:28:34.583 }, 00:28:34.583 { 00:28:34.583 "name": "BaseBdev4", 00:28:34.583 "uuid": "59127062-d82f-57fe-b7c0-9b52c87c25b6", 00:28:34.583 "is_configured": true, 00:28:34.583 "data_offset": 2048, 00:28:34.583 "data_size": 63488 00:28:34.583 } 00:28:34.583 ] 00:28:34.583 }' 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:34.583 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:34.841 
13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.841 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.841 [2024-10-28 13:39:48.979525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:34.841 [2024-10-28 13:39:48.979568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:34.841 [2024-10-28 13:39:48.982881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:34.841 [2024-10-28 13:39:48.982978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.841 [2024-10-28 13:39:48.983172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:34.841 [2024-10-28 13:39:48.983198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:34.841 { 00:28:34.841 "results": [ 00:28:34.841 { 00:28:34.841 "job": "raid_bdev1", 00:28:34.841 "core_mask": "0x1", 00:28:34.841 "workload": "randrw", 00:28:34.842 "percentage": 50, 00:28:34.842 "status": "finished", 00:28:34.842 "queue_depth": 1, 00:28:34.842 "io_size": 131072, 00:28:34.842 "runtime": 1.423841, 00:28:34.842 "iops": 8203.865459696694, 00:28:34.842 "mibps": 1025.4831824620867, 00:28:34.842 "io_failed": 0, 00:28:34.842 "io_timeout": 0, 00:28:34.842 "avg_latency_us": 117.8488292565238, 00:28:34.842 "min_latency_us": 36.305454545454545, 00:28:34.842 "max_latency_us": 1966.08 00:28:34.842 } 00:28:34.842 ], 00:28:34.842 "core_count": 1 00:28:34.842 } 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87730 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 87730 ']' 00:28:34.842 13:39:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 87730 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:34.842 13:39:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87730 00:28:35.101 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:35.101 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:35.101 killing process with pid 87730 00:28:35.101 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87730' 00:28:35.101 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 87730 00:28:35.101 [2024-10-28 13:39:49.029939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:35.101 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 87730 00:28:35.101 [2024-10-28 13:39:49.064777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xSA3C1v8JC 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:35.359 00:28:35.359 real 0m3.093s 00:28:35.359 user 0m4.133s 00:28:35.359 sys 0m0.571s 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:35.359 ************************************ 00:28:35.359 END TEST raid_read_error_test 00:28:35.359 13:39:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.359 ************************************ 00:28:35.359 13:39:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:28:35.359 13:39:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:35.359 13:39:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:35.359 13:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:35.359 ************************************ 00:28:35.359 START TEST raid_write_error_test 00:28:35.359 ************************************ 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:35.359 13:39:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:35.359 13:39:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rSQOTlZqYg 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87857 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87857 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 87857 ']' 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:35.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:35.359 13:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.359 [2024-10-28 13:39:49.501197] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:35.359 [2024-10-28 13:39:49.501419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87857 ] 00:28:35.618 [2024-10-28 13:39:49.657065] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:35.618 [2024-10-28 13:39:49.679856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.618 [2024-10-28 13:39:49.725285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.877 [2024-10-28 13:39:49.793046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:35.877 [2024-10-28 13:39:49.793124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.443 BaseBdev1_malloc 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.443 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 true 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 [2024-10-28 13:39:50.527090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:36.444 [2024-10-28 13:39:50.527168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.444 [2024-10-28 13:39:50.527195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:36.444 [2024-10-28 13:39:50.527216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.444 [2024-10-28 13:39:50.530090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.444 [2024-10-28 13:39:50.530156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:36.444 BaseBdev1 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 BaseBdev2_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 true 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 [2024-10-28 13:39:50.558817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:36.444 [2024-10-28 13:39:50.558877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.444 [2024-10-28 13:39:50.558901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:36.444 [2024-10-28 13:39:50.558919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.444 [2024-10-28 13:39:50.561880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.444 [2024-10-28 13:39:50.561931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:36.444 BaseBdev2 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 BaseBdev3_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 true 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.444 [2024-10-28 13:39:50.590639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:36.444 [2024-10-28 13:39:50.590700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.444 [2024-10-28 13:39:50.590726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:36.444 [2024-10-28 13:39:50.590743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.444 [2024-10-28 13:39:50.593759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.444 [2024-10-28 13:39:50.593807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:36.444 BaseBdev3 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.444 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.702 BaseBdev4_malloc 00:28:36.702 
13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.702 true 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.702 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 [2024-10-28 13:39:50.630427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:28:36.703 [2024-10-28 13:39:50.630495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.703 [2024-10-28 13:39:50.630536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:36.703 [2024-10-28 13:39:50.630554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.703 [2024-10-28 13:39:50.633465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.703 [2024-10-28 13:39:50.633528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:36.703 BaseBdev4 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:28:36.703 13:39:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 [2024-10-28 13:39:50.638466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:36.703 [2024-10-28 13:39:50.640999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:36.703 [2024-10-28 13:39:50.641103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:36.703 [2024-10-28 13:39:50.641209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:36.703 [2024-10-28 13:39:50.641494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:36.703 [2024-10-28 13:39:50.641545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:36.703 [2024-10-28 13:39:50.641871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:36.703 [2024-10-28 13:39:50.642076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:36.703 [2024-10-28 13:39:50.642101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:36.703 [2024-10-28 13:39:50.642292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:36.703 "name": "raid_bdev1", 00:28:36.703 "uuid": "dc529cca-14c1-4ae2-b724-3533d9664358", 00:28:36.703 "strip_size_kb": 0, 00:28:36.703 "state": "online", 00:28:36.703 "raid_level": "raid1", 00:28:36.703 "superblock": true, 00:28:36.703 "num_base_bdevs": 4, 00:28:36.703 "num_base_bdevs_discovered": 4, 00:28:36.703 "num_base_bdevs_operational": 4, 00:28:36.703 "base_bdevs_list": [ 00:28:36.703 { 00:28:36.703 "name": "BaseBdev1", 00:28:36.703 "uuid": "d529728d-7915-5150-9d10-12d86bf63b91", 00:28:36.703 "is_configured": true, 00:28:36.703 "data_offset": 2048, 00:28:36.703 "data_size": 63488 00:28:36.703 }, 00:28:36.703 { 00:28:36.703 
"name": "BaseBdev2", 00:28:36.703 "uuid": "b419301d-1e1f-5bb5-843a-5802b8ef533e", 00:28:36.703 "is_configured": true, 00:28:36.703 "data_offset": 2048, 00:28:36.703 "data_size": 63488 00:28:36.703 }, 00:28:36.703 { 00:28:36.703 "name": "BaseBdev3", 00:28:36.703 "uuid": "d1e2aabb-2544-5216-b04f-cafbdfdc104d", 00:28:36.703 "is_configured": true, 00:28:36.703 "data_offset": 2048, 00:28:36.703 "data_size": 63488 00:28:36.703 }, 00:28:36.703 { 00:28:36.703 "name": "BaseBdev4", 00:28:36.703 "uuid": "0e88fa75-2641-5c86-8939-48b40390bd79", 00:28:36.703 "is_configured": true, 00:28:36.703 "data_offset": 2048, 00:28:36.703 "data_size": 63488 00:28:36.703 } 00:28:36.703 ] 00:28:36.703 }' 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:36.703 13:39:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.284 13:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:37.284 13:39:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:37.284 [2024-10-28 13:39:51.271388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.218 [2024-10-28 13:39:52.170396] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:28:38.218 [2024-10-28 13:39:52.170491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:38.218 [2024-10-28 13:39:52.170824] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006490 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.218 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:38.218 "name": "raid_bdev1", 00:28:38.218 "uuid": "dc529cca-14c1-4ae2-b724-3533d9664358", 00:28:38.218 "strip_size_kb": 0, 00:28:38.218 "state": "online", 00:28:38.218 "raid_level": "raid1", 00:28:38.218 "superblock": true, 00:28:38.218 "num_base_bdevs": 4, 00:28:38.218 "num_base_bdevs_discovered": 3, 00:28:38.218 "num_base_bdevs_operational": 3, 00:28:38.218 "base_bdevs_list": [ 00:28:38.218 { 00:28:38.218 "name": null, 00:28:38.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.219 "is_configured": false, 00:28:38.219 "data_offset": 0, 00:28:38.219 "data_size": 63488 00:28:38.219 }, 00:28:38.219 { 00:28:38.219 "name": "BaseBdev2", 00:28:38.219 "uuid": "b419301d-1e1f-5bb5-843a-5802b8ef533e", 00:28:38.219 "is_configured": true, 00:28:38.219 "data_offset": 2048, 00:28:38.219 "data_size": 63488 00:28:38.219 }, 00:28:38.219 { 00:28:38.219 "name": "BaseBdev3", 00:28:38.219 "uuid": "d1e2aabb-2544-5216-b04f-cafbdfdc104d", 00:28:38.219 "is_configured": true, 00:28:38.219 "data_offset": 2048, 00:28:38.219 "data_size": 63488 00:28:38.219 }, 00:28:38.219 { 00:28:38.219 "name": "BaseBdev4", 00:28:38.219 "uuid": "0e88fa75-2641-5c86-8939-48b40390bd79", 00:28:38.219 "is_configured": true, 00:28:38.219 "data_offset": 2048, 00:28:38.219 "data_size": 63488 00:28:38.219 } 00:28:38.219 ] 00:28:38.219 }' 00:28:38.219 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:38.219 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.785 [2024-10-28 13:39:52.715074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:38.785 [2024-10-28 13:39:52.715114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:38.785 [2024-10-28 13:39:52.718485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:38.785 [2024-10-28 13:39:52.718591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.785 [2024-10-28 13:39:52.718733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:38.785 [2024-10-28 13:39:52.718752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:38.785 { 00:28:38.785 "results": [ 00:28:38.785 { 00:28:38.785 "job": "raid_bdev1", 00:28:38.785 "core_mask": "0x1", 00:28:38.785 "workload": "randrw", 00:28:38.785 "percentage": 50, 00:28:38.785 "status": "finished", 00:28:38.785 "queue_depth": 1, 00:28:38.785 "io_size": 131072, 00:28:38.785 "runtime": 1.440898, 00:28:38.785 "iops": 8106.750096120613, 00:28:38.785 "mibps": 1013.3437620150767, 00:28:38.785 "io_failed": 0, 00:28:38.785 "io_timeout": 0, 00:28:38.785 "avg_latency_us": 118.87343891790086, 00:28:38.785 "min_latency_us": 37.70181818181818, 00:28:38.785 "max_latency_us": 1995.8690909090908 00:28:38.785 } 00:28:38.785 ], 00:28:38.785 "core_count": 1 00:28:38.785 } 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87857 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 87857 ']' 
00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 87857 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87857 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:38.785 killing process with pid 87857 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87857' 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 87857 00:28:38.785 [2024-10-28 13:39:52.757452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:38.785 13:39:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 87857 00:28:38.785 [2024-10-28 13:39:52.801541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rSQOTlZqYg 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:39.044 13:39:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:28:39.044 00:28:39.044 real 0m3.702s 00:28:39.044 user 0m4.872s 00:28:39.044 sys 0m0.592s 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.044 13:39:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.044 ************************************ 00:28:39.044 END TEST raid_write_error_test 00:28:39.044 ************************************ 00:28:39.044 13:39:53 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:28:39.044 13:39:53 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:28:39.044 13:39:53 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:28:39.044 13:39:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:39.044 13:39:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.044 13:39:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:39.044 ************************************ 00:28:39.044 START TEST raid_rebuild_test 00:28:39.044 ************************************ 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87994 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87994 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87994 ']' 00:28:39.044 13:39:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:39.044 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.045 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.045 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.045 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.045 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.303 [2024-10-28 13:39:53.258823] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:39.303 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:39.303 Zero copy mechanism will not be used. 00:28:39.303 [2024-10-28 13:39:53.259047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87994 ] 00:28:39.303 [2024-10-28 13:39:53.416520] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:39.303 [2024-10-28 13:39:53.444691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.561 [2024-10-28 13:39:53.500221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.561 [2024-10-28 13:39:53.563875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:39.561 [2024-10-28 13:39:53.563959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 BaseBdev1_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 [2024-10-28 13:39:53.656644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:39.561 [2024-10-28 13:39:53.656747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.561 [2024-10-28 13:39:53.656785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:39.561 [2024-10-28 13:39:53.656806] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.561 [2024-10-28 13:39:53.660109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.561 [2024-10-28 13:39:53.660194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:39.561 BaseBdev1 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 BaseBdev2_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 [2024-10-28 13:39:53.690205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:39.561 [2024-10-28 13:39:53.690328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.561 [2024-10-28 13:39:53.690357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:39.561 [2024-10-28 13:39:53.690374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.561 [2024-10-28 13:39:53.693614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.561 [2024-10-28 13:39:53.693660] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:39.561 BaseBdev2 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.561 spare_malloc 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.561 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.820 spare_delay 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.820 [2024-10-28 13:39:53.734952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:39.820 [2024-10-28 13:39:53.735056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.820 [2024-10-28 13:39:53.735100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:39.820 [2024-10-28 13:39:53.735120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.820 [2024-10-28 
13:39:53.738432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.820 [2024-10-28 13:39:53.738489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:39.820 spare 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.820 [2024-10-28 13:39:53.747184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:39.820 [2024-10-28 13:39:53.750130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:39.820 [2024-10-28 13:39:53.750281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:28:39.820 [2024-10-28 13:39:53.750300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:39.820 [2024-10-28 13:39:53.750760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:39.820 [2024-10-28 13:39:53.750956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:28:39.820 [2024-10-28 13:39:53.750988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:28:39.820 [2024-10-28 13:39:53.751213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:39.820 13:39:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:39.820 "name": "raid_bdev1", 00:28:39.820 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:39.820 "strip_size_kb": 0, 00:28:39.820 "state": "online", 00:28:39.820 "raid_level": "raid1", 00:28:39.820 "superblock": false, 00:28:39.820 "num_base_bdevs": 2, 00:28:39.820 "num_base_bdevs_discovered": 2, 00:28:39.820 "num_base_bdevs_operational": 2, 00:28:39.820 "base_bdevs_list": [ 00:28:39.820 { 00:28:39.820 "name": "BaseBdev1", 
00:28:39.820 "uuid": "7e8281c3-bb74-5e89-a651-fe18d3b28ded", 00:28:39.820 "is_configured": true, 00:28:39.820 "data_offset": 0, 00:28:39.820 "data_size": 65536 00:28:39.820 }, 00:28:39.820 { 00:28:39.820 "name": "BaseBdev2", 00:28:39.820 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:39.820 "is_configured": true, 00:28:39.820 "data_offset": 0, 00:28:39.820 "data_size": 65536 00:28:39.820 } 00:28:39.820 ] 00:28:39.820 }' 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:39.820 13:39:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.388 [2024-10-28 13:39:54.315784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:40.388 
13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.388 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:40.647 [2024-10-28 13:39:54.739649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:40.647 /dev/nbd0 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.647 1+0 records in 00:28:40.647 1+0 records out 00:28:40.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345635 s, 11.9 MB/s 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:40.647 13:39:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:28:47.231 65536+0 records in 00:28:47.231 65536+0 records out 00:28:47.231 33554432 bytes (34 MB, 32 MiB) copied, 6.56524 s, 5.1 MB/s 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:47.231 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:47.796 [2024-10-28 13:40:01.698463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.796 [2024-10-28 13:40:01.738605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.796 13:40:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:47.796 "name": "raid_bdev1", 00:28:47.796 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:47.796 "strip_size_kb": 0, 00:28:47.796 "state": "online", 00:28:47.796 "raid_level": "raid1", 00:28:47.796 "superblock": false, 00:28:47.796 "num_base_bdevs": 2, 00:28:47.796 "num_base_bdevs_discovered": 1, 00:28:47.796 "num_base_bdevs_operational": 1, 00:28:47.796 "base_bdevs_list": [ 00:28:47.796 { 00:28:47.796 "name": null, 00:28:47.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.796 "is_configured": false, 00:28:47.796 "data_offset": 0, 00:28:47.796 "data_size": 65536 00:28:47.796 }, 00:28:47.796 { 00:28:47.796 "name": "BaseBdev2", 00:28:47.796 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:47.796 "is_configured": true, 00:28:47.796 "data_offset": 0, 00:28:47.796 "data_size": 65536 00:28:47.796 } 00:28:47.796 ] 00:28:47.796 }' 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:47.796 13:40:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.054 13:40:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.054 13:40:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.054 13:40:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:48.312 [2024-10-28 13:40:02.214649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.312 [2024-10-28 13:40:02.236837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:28:48.312 13:40:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.312 13:40:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:48.312 [2024-10-28 13:40:02.240568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:49.246 "name": "raid_bdev1", 00:28:49.246 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:49.246 "strip_size_kb": 0, 00:28:49.246 "state": "online", 00:28:49.246 "raid_level": "raid1", 00:28:49.246 "superblock": false, 00:28:49.246 "num_base_bdevs": 2, 00:28:49.246 "num_base_bdevs_discovered": 2, 00:28:49.246 "num_base_bdevs_operational": 2, 00:28:49.246 "process": { 00:28:49.246 "type": "rebuild", 00:28:49.246 "target": "spare", 00:28:49.246 "progress": { 00:28:49.246 "blocks": 20480, 00:28:49.246 "percent": 31 00:28:49.246 } 00:28:49.246 }, 00:28:49.246 "base_bdevs_list": [ 00:28:49.246 { 00:28:49.246 "name": "spare", 00:28:49.246 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:49.246 "is_configured": true, 00:28:49.246 "data_offset": 0, 00:28:49.246 
"data_size": 65536 00:28:49.246 }, 00:28:49.246 { 00:28:49.246 "name": "BaseBdev2", 00:28:49.246 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:49.246 "is_configured": true, 00:28:49.246 "data_offset": 0, 00:28:49.246 "data_size": 65536 00:28:49.246 } 00:28:49.246 ] 00:28:49.246 }' 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.246 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:49.504 [2024-10-28 13:40:03.430279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.504 [2024-10-28 13:40:03.450308] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:49.504 [2024-10-28 13:40:03.450431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.504 [2024-10-28 13:40:03.450457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:49.504 [2024-10-28 13:40:03.450483] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.504 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:49.504 "name": "raid_bdev1", 00:28:49.504 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:49.504 "strip_size_kb": 0, 00:28:49.504 "state": "online", 00:28:49.504 "raid_level": "raid1", 00:28:49.504 "superblock": false, 00:28:49.504 "num_base_bdevs": 2, 00:28:49.504 "num_base_bdevs_discovered": 1, 00:28:49.504 "num_base_bdevs_operational": 1, 00:28:49.504 "base_bdevs_list": [ 00:28:49.504 { 00:28:49.504 "name": null, 00:28:49.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.505 
"is_configured": false, 00:28:49.505 "data_offset": 0, 00:28:49.505 "data_size": 65536 00:28:49.505 }, 00:28:49.505 { 00:28:49.505 "name": "BaseBdev2", 00:28:49.505 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:49.505 "is_configured": true, 00:28:49.505 "data_offset": 0, 00:28:49.505 "data_size": 65536 00:28:49.505 } 00:28:49.505 ] 00:28:49.505 }' 00:28:49.505 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:49.505 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.071 13:40:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:50.071 "name": "raid_bdev1", 00:28:50.071 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:50.071 "strip_size_kb": 0, 00:28:50.071 "state": "online", 00:28:50.071 "raid_level": "raid1", 00:28:50.071 "superblock": false, 00:28:50.071 "num_base_bdevs": 2, 00:28:50.071 
"num_base_bdevs_discovered": 1, 00:28:50.071 "num_base_bdevs_operational": 1, 00:28:50.071 "base_bdevs_list": [ 00:28:50.071 { 00:28:50.071 "name": null, 00:28:50.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.071 "is_configured": false, 00:28:50.071 "data_offset": 0, 00:28:50.071 "data_size": 65536 00:28:50.071 }, 00:28:50.071 { 00:28:50.071 "name": "BaseBdev2", 00:28:50.071 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:50.071 "is_configured": true, 00:28:50.071 "data_offset": 0, 00:28:50.071 "data_size": 65536 00:28:50.071 } 00:28:50.071 ] 00:28:50.071 }' 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:50.071 [2024-10-28 13:40:04.165572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:50.071 [2024-10-28 13:40:04.172192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.071 13:40:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:50.071 [2024-10-28 13:40:04.174776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.443 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:51.444 "name": "raid_bdev1", 00:28:51.444 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:51.444 "strip_size_kb": 0, 00:28:51.444 "state": "online", 00:28:51.444 "raid_level": "raid1", 00:28:51.444 "superblock": false, 00:28:51.444 "num_base_bdevs": 2, 00:28:51.444 "num_base_bdevs_discovered": 2, 00:28:51.444 "num_base_bdevs_operational": 2, 00:28:51.444 "process": { 00:28:51.444 "type": "rebuild", 00:28:51.444 "target": "spare", 00:28:51.444 "progress": { 00:28:51.444 "blocks": 20480, 00:28:51.444 "percent": 31 00:28:51.444 } 00:28:51.444 }, 00:28:51.444 "base_bdevs_list": [ 00:28:51.444 { 00:28:51.444 "name": "spare", 00:28:51.444 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:51.444 "is_configured": true, 00:28:51.444 "data_offset": 0, 00:28:51.444 "data_size": 65536 00:28:51.444 }, 00:28:51.444 { 00:28:51.444 "name": "BaseBdev2", 00:28:51.444 "uuid": 
"6db53931-eca7-5b39-a119-98aa7004711b", 00:28:51.444 "is_configured": true, 00:28:51.444 "data_offset": 0, 00:28:51.444 "data_size": 65536 00:28:51.444 } 00:28:51.444 ] 00:28:51.444 }' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=339 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:51.444 "name": "raid_bdev1", 00:28:51.444 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:51.444 "strip_size_kb": 0, 00:28:51.444 "state": "online", 00:28:51.444 "raid_level": "raid1", 00:28:51.444 "superblock": false, 00:28:51.444 "num_base_bdevs": 2, 00:28:51.444 "num_base_bdevs_discovered": 2, 00:28:51.444 "num_base_bdevs_operational": 2, 00:28:51.444 "process": { 00:28:51.444 "type": "rebuild", 00:28:51.444 "target": "spare", 00:28:51.444 "progress": { 00:28:51.444 "blocks": 22528, 00:28:51.444 "percent": 34 00:28:51.444 } 00:28:51.444 }, 00:28:51.444 "base_bdevs_list": [ 00:28:51.444 { 00:28:51.444 "name": "spare", 00:28:51.444 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:51.444 "is_configured": true, 00:28:51.444 "data_offset": 0, 00:28:51.444 "data_size": 65536 00:28:51.444 }, 00:28:51.444 { 00:28:51.444 "name": "BaseBdev2", 00:28:51.444 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:51.444 "is_configured": true, 00:28:51.444 "data_offset": 0, 00:28:51.444 "data_size": 65536 00:28:51.444 } 00:28:51.444 ] 00:28:51.444 }' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.444 13:40:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.438 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.439 "name": "raid_bdev1", 00:28:52.439 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:52.439 "strip_size_kb": 0, 00:28:52.439 "state": "online", 00:28:52.439 "raid_level": "raid1", 00:28:52.439 "superblock": false, 00:28:52.439 "num_base_bdevs": 2, 00:28:52.439 "num_base_bdevs_discovered": 2, 00:28:52.439 "num_base_bdevs_operational": 2, 00:28:52.439 "process": { 00:28:52.439 "type": "rebuild", 00:28:52.439 "target": "spare", 00:28:52.439 "progress": { 00:28:52.439 "blocks": 47104, 00:28:52.439 "percent": 71 00:28:52.439 } 00:28:52.439 }, 00:28:52.439 "base_bdevs_list": [ 00:28:52.439 { 00:28:52.439 "name": "spare", 00:28:52.439 "uuid": 
"40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:52.439 "is_configured": true, 00:28:52.439 "data_offset": 0, 00:28:52.439 "data_size": 65536 00:28:52.439 }, 00:28:52.439 { 00:28:52.439 "name": "BaseBdev2", 00:28:52.439 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:52.439 "is_configured": true, 00:28:52.439 "data_offset": 0, 00:28:52.439 "data_size": 65536 00:28:52.439 } 00:28:52.439 ] 00:28:52.439 }' 00:28:52.439 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:52.697 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.697 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:52.697 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.697 13:40:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:53.262 [2024-10-28 13:40:07.398838] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:53.262 [2024-10-28 13:40:07.398977] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:53.262 [2024-10-28 13:40:07.399048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:53.829 13:40:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:53.829 "name": "raid_bdev1", 00:28:53.829 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:53.829 "strip_size_kb": 0, 00:28:53.829 "state": "online", 00:28:53.829 "raid_level": "raid1", 00:28:53.829 "superblock": false, 00:28:53.829 "num_base_bdevs": 2, 00:28:53.829 "num_base_bdevs_discovered": 2, 00:28:53.829 "num_base_bdevs_operational": 2, 00:28:53.829 "base_bdevs_list": [ 00:28:53.829 { 00:28:53.829 "name": "spare", 00:28:53.829 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 }, 00:28:53.829 { 00:28:53.829 "name": "BaseBdev2", 00:28:53.829 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 } 00:28:53.829 ] 00:28:53.829 }' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:53.829 "name": "raid_bdev1", 00:28:53.829 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:53.829 "strip_size_kb": 0, 00:28:53.829 "state": "online", 00:28:53.829 "raid_level": "raid1", 00:28:53.829 "superblock": false, 00:28:53.829 "num_base_bdevs": 2, 00:28:53.829 "num_base_bdevs_discovered": 2, 00:28:53.829 "num_base_bdevs_operational": 2, 00:28:53.829 "base_bdevs_list": [ 00:28:53.829 { 00:28:53.829 "name": "spare", 00:28:53.829 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 00:28:53.829 }, 00:28:53.829 { 00:28:53.829 "name": "BaseBdev2", 00:28:53.829 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:53.829 "is_configured": true, 00:28:53.829 "data_offset": 0, 00:28:53.829 "data_size": 65536 
00:28:53.829 } 00:28:53.829 ] 00:28:53.829 }' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:53.829 13:40:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.088 
13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:54.088 "name": "raid_bdev1", 00:28:54.088 "uuid": "fd27bde8-187e-4aa0-888f-554c52bc362f", 00:28:54.088 "strip_size_kb": 0, 00:28:54.088 "state": "online", 00:28:54.088 "raid_level": "raid1", 00:28:54.088 "superblock": false, 00:28:54.088 "num_base_bdevs": 2, 00:28:54.088 "num_base_bdevs_discovered": 2, 00:28:54.088 "num_base_bdevs_operational": 2, 00:28:54.088 "base_bdevs_list": [ 00:28:54.088 { 00:28:54.088 "name": "spare", 00:28:54.088 "uuid": "40459551-bf94-5f69-b6e3-99f4a6d310c2", 00:28:54.088 "is_configured": true, 00:28:54.088 "data_offset": 0, 00:28:54.088 "data_size": 65536 00:28:54.088 }, 00:28:54.088 { 00:28:54.088 "name": "BaseBdev2", 00:28:54.088 "uuid": "6db53931-eca7-5b39-a119-98aa7004711b", 00:28:54.088 "is_configured": true, 00:28:54.088 "data_offset": 0, 00:28:54.088 "data_size": 65536 00:28:54.088 } 00:28:54.088 ] 00:28:54.088 }' 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:54.088 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.654 [2024-10-28 13:40:08.549832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:54.654 [2024-10-28 13:40:08.549873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:54.654 [2024-10-28 13:40:08.549998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:54.654 [2024-10-28 13:40:08.550105] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:54.654 [2024-10-28 13:40:08.550123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:54.654 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:54.655 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:54.655 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:54.655 13:40:08 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:54.655 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:54.655 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:54.913 /dev/nbd0 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:54.913 1+0 records in 00:28:54.913 1+0 records out 00:28:54.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252839 s, 16.2 MB/s 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:54.913 13:40:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:55.171 /dev/nbd1 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:55.171 1+0 records in 00:28:55.171 1+0 records out 00:28:55.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468911 s, 8.7 MB/s 00:28:55.171 13:40:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.171 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:55.172 13:40:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:28:55.172 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:55.172 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:55.172 13:40:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:55.430 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:55.689 
13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:55.689 13:40:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87994 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87994 ']' 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87994 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87994 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:55.949 killing process with pid 87994 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87994' 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87994 00:28:55.949 Received shutdown signal, test time was about 60.000000 seconds 00:28:55.949 00:28:55.949 Latency(us) 00:28:55.949 [2024-10-28T13:40:10.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.949 [2024-10-28T13:40:10.109Z] =================================================================================================================== 00:28:55.949 [2024-10-28T13:40:10.109Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:55.949 [2024-10-28 13:40:10.054573] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:55.949 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87994 00:28:55.949 [2024-10-28 13:40:10.086689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:56.208 13:40:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:28:56.208 00:28:56.208 real 0m17.187s 00:28:56.208 user 0m19.772s 00:28:56.208 sys 0m3.725s 00:28:56.208 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.208 13:40:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:56.208 ************************************ 00:28:56.208 END TEST raid_rebuild_test 
00:28:56.208 ************************************ 00:28:56.468 13:40:10 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:28:56.468 13:40:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:56.468 13:40:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:56.468 13:40:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:56.468 ************************************ 00:28:56.468 START TEST raid_rebuild_test_sb 00:28:56.468 ************************************ 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88428 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88428 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88428 ']' 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.468 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.468 13:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.468 [2024-10-28 13:40:10.498308] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:28:56.468 [2024-10-28 13:40:10.498524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88428 ] 00:28:56.468 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:56.468 Zero copy mechanism will not be used. 00:28:56.727 [2024-10-28 13:40:10.651653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:56.727 [2024-10-28 13:40:10.682824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.727 [2024-10-28 13:40:10.733538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.727 [2024-10-28 13:40:10.792998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:56.727 [2024-10-28 13:40:10.793049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 BaseBdev1_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 [2024-10-28 13:40:11.531833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:57.663 [2024-10-28 13:40:11.531918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.663 [2024-10-28 13:40:11.531956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:57.663 [2024-10-28 
13:40:11.531988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.663 [2024-10-28 13:40:11.534894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.663 [2024-10-28 13:40:11.534946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:57.663 BaseBdev1 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 BaseBdev2_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 [2024-10-28 13:40:11.555667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:57.663 [2024-10-28 13:40:11.555743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.663 [2024-10-28 13:40:11.555771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:57.663 [2024-10-28 13:40:11.555789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.663 [2024-10-28 13:40:11.558606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:28:57.663 [2024-10-28 13:40:11.558658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:57.663 BaseBdev2 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 spare_malloc 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 spare_delay 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 [2024-10-28 13:40:11.587250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:57.663 [2024-10-28 13:40:11.587327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.663 [2024-10-28 13:40:11.587359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:28:57.663 [2024-10-28 13:40:11.587381] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.663 [2024-10-28 13:40:11.590308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.663 [2024-10-28 13:40:11.590362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:57.663 spare 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 [2024-10-28 13:40:11.595336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:57.663 [2024-10-28 13:40:11.597825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:57.663 [2024-10-28 13:40:11.598038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:28:57.663 [2024-10-28 13:40:11.598060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:57.663 [2024-10-28 13:40:11.598433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:57.663 [2024-10-28 13:40:11.598662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:28:57.663 [2024-10-28 13:40:11.598684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:28:57.663 [2024-10-28 13:40:11.598877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:57.663 "name": "raid_bdev1", 00:28:57.663 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:28:57.663 "strip_size_kb": 0, 00:28:57.663 "state": "online", 00:28:57.663 "raid_level": "raid1", 00:28:57.663 "superblock": true, 00:28:57.663 "num_base_bdevs": 2, 00:28:57.663 
"num_base_bdevs_discovered": 2, 00:28:57.663 "num_base_bdevs_operational": 2, 00:28:57.663 "base_bdevs_list": [ 00:28:57.663 { 00:28:57.663 "name": "BaseBdev1", 00:28:57.663 "uuid": "0afb4e15-97a7-5f0e-be4d-8c3b0054008d", 00:28:57.663 "is_configured": true, 00:28:57.663 "data_offset": 2048, 00:28:57.663 "data_size": 63488 00:28:57.663 }, 00:28:57.663 { 00:28:57.663 "name": "BaseBdev2", 00:28:57.663 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:28:57.663 "is_configured": true, 00:28:57.663 "data_offset": 2048, 00:28:57.663 "data_size": 63488 00:28:57.663 } 00:28:57.663 ] 00:28:57.663 }' 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:57.663 13:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:58.230 [2024-10-28 13:40:12.119842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.230 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:58.489 [2024-10-28 13:40:12.559630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:58.489 /dev/nbd0 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.489 1+0 records in 00:28:58.489 1+0 records out 00:28:58.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454096 s, 9.0 MB/s 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.489 13:40:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:58.489 13:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:05.131 63488+0 records in 00:29:05.131 63488+0 records out 00:29:05.131 32505856 bytes (33 MB, 31 MiB) copied, 5.8691 s, 5.5 MB/s 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:05.131 [2024-10-28 13:40:18.773812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.131 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.131 [2024-10-28 13:40:18.809905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.132 13:40:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.132 "name": "raid_bdev1", 00:29:05.132 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:05.132 "strip_size_kb": 0, 00:29:05.132 "state": "online", 00:29:05.132 "raid_level": "raid1", 00:29:05.132 "superblock": true, 00:29:05.132 "num_base_bdevs": 2, 00:29:05.132 "num_base_bdevs_discovered": 1, 00:29:05.132 "num_base_bdevs_operational": 1, 00:29:05.132 "base_bdevs_list": [ 00:29:05.132 { 00:29:05.132 "name": null, 00:29:05.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.132 "is_configured": false, 00:29:05.132 "data_offset": 0, 00:29:05.132 "data_size": 63488 00:29:05.132 }, 00:29:05.132 { 00:29:05.132 "name": "BaseBdev2", 00:29:05.132 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:05.132 "is_configured": true, 00:29:05.132 "data_offset": 2048, 00:29:05.132 "data_size": 63488 00:29:05.132 } 00:29:05.132 ] 00:29:05.132 }' 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.132 13:40:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.391 13:40:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:05.391 13:40:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.391 13:40:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:05.391 [2024-10-28 13:40:19.338099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:29:05.391 [2024-10-28 13:40:19.363743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:29:05.391 13:40:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.391 13:40:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:05.391 [2024-10-28 13:40:19.367954] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.358 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:06.358 "name": "raid_bdev1", 00:29:06.358 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:06.359 "strip_size_kb": 0, 00:29:06.359 "state": "online", 00:29:06.359 "raid_level": "raid1", 00:29:06.359 "superblock": true, 00:29:06.359 "num_base_bdevs": 2, 00:29:06.359 
"num_base_bdevs_discovered": 2, 00:29:06.359 "num_base_bdevs_operational": 2, 00:29:06.359 "process": { 00:29:06.359 "type": "rebuild", 00:29:06.359 "target": "spare", 00:29:06.359 "progress": { 00:29:06.359 "blocks": 20480, 00:29:06.359 "percent": 32 00:29:06.359 } 00:29:06.359 }, 00:29:06.359 "base_bdevs_list": [ 00:29:06.359 { 00:29:06.359 "name": "spare", 00:29:06.359 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:06.359 "is_configured": true, 00:29:06.359 "data_offset": 2048, 00:29:06.359 "data_size": 63488 00:29:06.359 }, 00:29:06.359 { 00:29:06.359 "name": "BaseBdev2", 00:29:06.359 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:06.359 "is_configured": true, 00:29:06.359 "data_offset": 2048, 00:29:06.359 "data_size": 63488 00:29:06.359 } 00:29:06.359 ] 00:29:06.359 }' 00:29:06.359 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:06.359 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.359 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.618 [2024-10-28 13:40:20.541809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.618 [2024-10-28 13:40:20.577286] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.618 [2024-10-28 13:40:20.577386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.618 [2024-10-28 13:40:20.577409] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.618 [2024-10-28 13:40:20.577422] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:06.618 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.618 13:40:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:06.618 "name": "raid_bdev1", 00:29:06.618 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:06.618 "strip_size_kb": 0, 00:29:06.618 "state": "online", 00:29:06.618 "raid_level": "raid1", 00:29:06.618 "superblock": true, 00:29:06.618 "num_base_bdevs": 2, 00:29:06.618 "num_base_bdevs_discovered": 1, 00:29:06.618 "num_base_bdevs_operational": 1, 00:29:06.618 "base_bdevs_list": [ 00:29:06.618 { 00:29:06.618 "name": null, 00:29:06.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.618 "is_configured": false, 00:29:06.618 "data_offset": 0, 00:29:06.618 "data_size": 63488 00:29:06.618 }, 00:29:06.618 { 00:29:06.618 "name": "BaseBdev2", 00:29:06.619 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:06.619 "is_configured": true, 00:29:06.619 "data_offset": 2048, 00:29:06.619 "data_size": 63488 00:29:06.619 } 00:29:06.619 ] 00:29:06.619 }' 00:29:06.619 13:40:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:06.619 13:40:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:07.186 "name": "raid_bdev1", 00:29:07.186 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:07.186 "strip_size_kb": 0, 00:29:07.186 "state": "online", 00:29:07.186 "raid_level": "raid1", 00:29:07.186 "superblock": true, 00:29:07.186 "num_base_bdevs": 2, 00:29:07.186 "num_base_bdevs_discovered": 1, 00:29:07.186 "num_base_bdevs_operational": 1, 00:29:07.186 "base_bdevs_list": [ 00:29:07.186 { 00:29:07.186 "name": null, 00:29:07.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.186 "is_configured": false, 00:29:07.186 "data_offset": 0, 00:29:07.186 "data_size": 63488 00:29:07.186 }, 00:29:07.186 { 00:29:07.186 "name": "BaseBdev2", 00:29:07.186 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:07.186 "is_configured": true, 00:29:07.186 "data_offset": 2048, 00:29:07.186 "data_size": 63488 00:29:07.186 } 00:29:07.186 ] 00:29:07.186 }' 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:29:07.186 [2024-10-28 13:40:21.284228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:07.186 [2024-10-28 13:40:21.291556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.186 13:40:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:07.186 [2024-10-28 13:40:21.294321] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:08.563 "name": "raid_bdev1", 00:29:08.563 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:08.563 "strip_size_kb": 0, 00:29:08.563 "state": "online", 00:29:08.563 "raid_level": "raid1", 
00:29:08.563 "superblock": true, 00:29:08.563 "num_base_bdevs": 2, 00:29:08.563 "num_base_bdevs_discovered": 2, 00:29:08.563 "num_base_bdevs_operational": 2, 00:29:08.563 "process": { 00:29:08.563 "type": "rebuild", 00:29:08.563 "target": "spare", 00:29:08.563 "progress": { 00:29:08.563 "blocks": 20480, 00:29:08.563 "percent": 32 00:29:08.563 } 00:29:08.563 }, 00:29:08.563 "base_bdevs_list": [ 00:29:08.563 { 00:29:08.563 "name": "spare", 00:29:08.563 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:08.563 "is_configured": true, 00:29:08.563 "data_offset": 2048, 00:29:08.563 "data_size": 63488 00:29:08.563 }, 00:29:08.563 { 00:29:08.563 "name": "BaseBdev2", 00:29:08.563 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:08.563 "is_configured": true, 00:29:08.563 "data_offset": 2048, 00:29:08.563 "data_size": 63488 00:29:08.563 } 00:29:08.563 ] 00:29:08.563 }' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:29:08.563 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:08.563 13:40:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=356 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:08.563 "name": "raid_bdev1", 00:29:08.563 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:08.563 "strip_size_kb": 0, 00:29:08.563 "state": "online", 00:29:08.563 "raid_level": "raid1", 00:29:08.563 "superblock": true, 00:29:08.563 "num_base_bdevs": 2, 00:29:08.563 "num_base_bdevs_discovered": 2, 00:29:08.563 "num_base_bdevs_operational": 2, 00:29:08.563 "process": { 00:29:08.563 "type": "rebuild", 00:29:08.563 "target": "spare", 00:29:08.563 "progress": { 00:29:08.563 "blocks": 22528, 00:29:08.563 "percent": 35 00:29:08.563 } 00:29:08.563 }, 00:29:08.563 "base_bdevs_list": [ 
00:29:08.563 { 00:29:08.563 "name": "spare", 00:29:08.563 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:08.563 "is_configured": true, 00:29:08.563 "data_offset": 2048, 00:29:08.563 "data_size": 63488 00:29:08.563 }, 00:29:08.563 { 00:29:08.563 "name": "BaseBdev2", 00:29:08.563 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:08.563 "is_configured": true, 00:29:08.563 "data_offset": 2048, 00:29:08.563 "data_size": 63488 00:29:08.563 } 00:29:08.563 ] 00:29:08.563 }' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:08.563 13:40:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.499 13:40:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:09.758 "name": "raid_bdev1", 00:29:09.758 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:09.758 "strip_size_kb": 0, 00:29:09.758 "state": "online", 00:29:09.758 "raid_level": "raid1", 00:29:09.758 "superblock": true, 00:29:09.758 "num_base_bdevs": 2, 00:29:09.758 "num_base_bdevs_discovered": 2, 00:29:09.758 "num_base_bdevs_operational": 2, 00:29:09.758 "process": { 00:29:09.758 "type": "rebuild", 00:29:09.758 "target": "spare", 00:29:09.758 "progress": { 00:29:09.758 "blocks": 47104, 00:29:09.758 "percent": 74 00:29:09.758 } 00:29:09.758 }, 00:29:09.758 "base_bdevs_list": [ 00:29:09.758 { 00:29:09.758 "name": "spare", 00:29:09.758 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:09.758 "is_configured": true, 00:29:09.758 "data_offset": 2048, 00:29:09.758 "data_size": 63488 00:29:09.758 }, 00:29:09.758 { 00:29:09.758 "name": "BaseBdev2", 00:29:09.758 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:09.758 "is_configured": true, 00:29:09.758 "data_offset": 2048, 00:29:09.758 "data_size": 63488 00:29:09.758 } 00:29:09.758 ] 00:29:09.758 }' 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:09.758 13:40:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:10.326 [2024-10-28 
13:40:24.417102] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:10.326 [2024-10-28 13:40:24.417463] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:10.326 [2024-10-28 13:40:24.417645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:10.893 "name": "raid_bdev1", 00:29:10.893 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:10.893 "strip_size_kb": 0, 00:29:10.893 "state": "online", 00:29:10.893 "raid_level": "raid1", 00:29:10.893 "superblock": true, 00:29:10.893 "num_base_bdevs": 2, 00:29:10.893 "num_base_bdevs_discovered": 2, 00:29:10.893 
"num_base_bdevs_operational": 2, 00:29:10.893 "base_bdevs_list": [ 00:29:10.893 { 00:29:10.893 "name": "spare", 00:29:10.893 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:10.893 "is_configured": true, 00:29:10.893 "data_offset": 2048, 00:29:10.893 "data_size": 63488 00:29:10.893 }, 00:29:10.893 { 00:29:10.893 "name": "BaseBdev2", 00:29:10.893 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:10.893 "is_configured": true, 00:29:10.893 "data_offset": 2048, 00:29:10.893 "data_size": 63488 00:29:10.893 } 00:29:10.893 ] 00:29:10.893 }' 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:10.893 13:40:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.893 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:10.893 "name": "raid_bdev1", 00:29:10.893 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:10.893 "strip_size_kb": 0, 00:29:10.893 "state": "online", 00:29:10.893 "raid_level": "raid1", 00:29:10.893 "superblock": true, 00:29:10.893 "num_base_bdevs": 2, 00:29:10.893 "num_base_bdevs_discovered": 2, 00:29:10.893 "num_base_bdevs_operational": 2, 00:29:10.893 "base_bdevs_list": [ 00:29:10.893 { 00:29:10.893 "name": "spare", 00:29:10.893 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:10.893 "is_configured": true, 00:29:10.893 "data_offset": 2048, 00:29:10.893 "data_size": 63488 00:29:10.893 }, 00:29:10.893 { 00:29:10.893 "name": "BaseBdev2", 00:29:10.893 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:10.893 "is_configured": true, 00:29:10.893 "data_offset": 2048, 00:29:10.893 "data_size": 63488 00:29:10.893 } 00:29:10.893 ] 00:29:10.893 }' 00:29:10.893 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:11.152 13:40:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:11.152 "name": "raid_bdev1", 00:29:11.152 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:11.152 "strip_size_kb": 0, 00:29:11.152 "state": "online", 00:29:11.152 "raid_level": "raid1", 00:29:11.152 "superblock": true, 00:29:11.152 "num_base_bdevs": 2, 00:29:11.152 "num_base_bdevs_discovered": 2, 00:29:11.152 "num_base_bdevs_operational": 2, 00:29:11.152 "base_bdevs_list": [ 00:29:11.152 { 00:29:11.152 "name": "spare", 00:29:11.152 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:11.152 "is_configured": true, 00:29:11.152 "data_offset": 2048, 00:29:11.152 "data_size": 63488 00:29:11.152 }, 00:29:11.152 { 
00:29:11.152 "name": "BaseBdev2", 00:29:11.152 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:11.152 "is_configured": true, 00:29:11.152 "data_offset": 2048, 00:29:11.152 "data_size": 63488 00:29:11.152 } 00:29:11.152 ] 00:29:11.152 }' 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:11.152 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:11.717 [2024-10-28 13:40:25.668274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:11.717 [2024-10-28 13:40:25.668481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:11.717 [2024-10-28 13:40:25.668726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:11.717 [2024-10-28 13:40:25.668957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:11.717 [2024-10-28 13:40:25.668986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:29:11.717 13:40:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:11.717 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:11.718 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:11.718 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:11.718 13:40:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:11.975 /dev/nbd0 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:11.975 1+0 records in 00:29:11.975 1+0 records out 00:29:11.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366022 s, 11.2 MB/s 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:11.975 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:29:12.361 /dev/nbd1 00:29:12.361 13:40:26 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:12.361 1+0 records in 00:29:12.361 1+0 records out 00:29:12.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027425 s, 14.9 MB/s 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:12.361 13:40:26 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:12.361 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:12.633 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:12.892 13:40:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.150 [2024-10-28 13:40:27.193787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:29:13.150 [2024-10-28 13:40:27.193902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.150 [2024-10-28 13:40:27.193955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:13.150 [2024-10-28 13:40:27.193970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.150 [2024-10-28 13:40:27.197006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.150 [2024-10-28 13:40:27.197061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:13.150 [2024-10-28 13:40:27.197208] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:13.150 [2024-10-28 13:40:27.197263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:13.150 [2024-10-28 13:40:27.197422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:13.150 spare 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.150 [2024-10-28 13:40:27.297534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:13.150 [2024-10-28 13:40:27.297614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:13.150 [2024-10-28 13:40:27.297993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:29:13.150 [2024-10-28 13:40:27.298206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:13.150 [2024-10-28 13:40:27.298223] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:13.150 [2024-10-28 13:40:27.298400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.150 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.409 
13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.409 "name": "raid_bdev1", 00:29:13.409 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:13.409 "strip_size_kb": 0, 00:29:13.409 "state": "online", 00:29:13.409 "raid_level": "raid1", 00:29:13.409 "superblock": true, 00:29:13.409 "num_base_bdevs": 2, 00:29:13.409 "num_base_bdevs_discovered": 2, 00:29:13.409 "num_base_bdevs_operational": 2, 00:29:13.409 "base_bdevs_list": [ 00:29:13.409 { 00:29:13.409 "name": "spare", 00:29:13.409 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:13.409 "is_configured": true, 00:29:13.409 "data_offset": 2048, 00:29:13.409 "data_size": 63488 00:29:13.409 }, 00:29:13.409 { 00:29:13.409 "name": "BaseBdev2", 00:29:13.409 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:13.409 "is_configured": true, 00:29:13.409 "data_offset": 2048, 00:29:13.409 "data_size": 63488 00:29:13.409 } 00:29:13.409 ] 00:29:13.409 }' 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.409 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.975 13:40:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:13.975 "name": "raid_bdev1", 00:29:13.975 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:13.975 "strip_size_kb": 0, 00:29:13.975 "state": "online", 00:29:13.975 "raid_level": "raid1", 00:29:13.975 "superblock": true, 00:29:13.975 "num_base_bdevs": 2, 00:29:13.975 "num_base_bdevs_discovered": 2, 00:29:13.975 "num_base_bdevs_operational": 2, 00:29:13.975 "base_bdevs_list": [ 00:29:13.975 { 00:29:13.975 "name": "spare", 00:29:13.975 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:13.975 "is_configured": true, 00:29:13.975 "data_offset": 2048, 00:29:13.975 "data_size": 63488 00:29:13.975 }, 00:29:13.975 { 00:29:13.975 "name": "BaseBdev2", 00:29:13.975 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:13.975 "is_configured": true, 00:29:13.975 "data_offset": 2048, 00:29:13.975 "data_size": 63488 00:29:13.975 } 00:29:13.975 ] 00:29:13.975 }' 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:13.975 13:40:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 [2024-10-28 13:40:28.050670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.975 "name": "raid_bdev1", 00:29:13.975 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:13.975 "strip_size_kb": 0, 00:29:13.975 "state": "online", 00:29:13.975 "raid_level": "raid1", 00:29:13.975 "superblock": true, 00:29:13.975 "num_base_bdevs": 2, 00:29:13.975 "num_base_bdevs_discovered": 1, 00:29:13.975 "num_base_bdevs_operational": 1, 00:29:13.975 "base_bdevs_list": [ 00:29:13.975 { 00:29:13.975 "name": null, 00:29:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.975 "is_configured": false, 00:29:13.975 "data_offset": 0, 00:29:13.975 "data_size": 63488 00:29:13.975 }, 00:29:13.975 { 00:29:13.975 "name": "BaseBdev2", 00:29:13.975 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:13.975 "is_configured": true, 00:29:13.975 "data_offset": 2048, 00:29:13.975 "data_size": 63488 00:29:13.975 } 00:29:13.975 ] 00:29:13.975 }' 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.975 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:14.541 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:14.541 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.541 13:40:28 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:14.541 [2024-10-28 13:40:28.546913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.541 [2024-10-28 13:40:28.547214] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:14.541 [2024-10-28 13:40:28.547245] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:14.541 [2024-10-28 13:40:28.547292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.541 [2024-10-28 13:40:28.554184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:29:14.541 13:40:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.541 13:40:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:14.541 [2024-10-28 13:40:28.556743] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:15.476 "name": "raid_bdev1", 00:29:15.476 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:15.476 "strip_size_kb": 0, 00:29:15.476 "state": "online", 00:29:15.476 "raid_level": "raid1", 00:29:15.476 "superblock": true, 00:29:15.476 "num_base_bdevs": 2, 00:29:15.476 "num_base_bdevs_discovered": 2, 00:29:15.476 "num_base_bdevs_operational": 2, 00:29:15.476 "process": { 00:29:15.476 "type": "rebuild", 00:29:15.476 "target": "spare", 00:29:15.476 "progress": { 00:29:15.476 "blocks": 20480, 00:29:15.476 "percent": 32 00:29:15.476 } 00:29:15.476 }, 00:29:15.476 "base_bdevs_list": [ 00:29:15.476 { 00:29:15.476 "name": "spare", 00:29:15.476 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:15.476 "is_configured": true, 00:29:15.476 "data_offset": 2048, 00:29:15.476 "data_size": 63488 00:29:15.476 }, 00:29:15.476 { 00:29:15.476 "name": "BaseBdev2", 00:29:15.476 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:15.476 "is_configured": true, 00:29:15.476 "data_offset": 2048, 00:29:15.476 "data_size": 63488 00:29:15.476 } 00:29:15.476 ] 00:29:15.476 }' 00:29:15.476 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:15.735 13:40:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:15.735 [2024-10-28 13:40:29.718775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.735 [2024-10-28 13:40:29.765252] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:15.735 [2024-10-28 13:40:29.765355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.735 [2024-10-28 13:40:29.765377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:15.735 [2024-10-28 13:40:29.765390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:15.735 "name": "raid_bdev1", 00:29:15.735 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:15.735 "strip_size_kb": 0, 00:29:15.735 "state": "online", 00:29:15.735 "raid_level": "raid1", 00:29:15.735 "superblock": true, 00:29:15.735 "num_base_bdevs": 2, 00:29:15.735 "num_base_bdevs_discovered": 1, 00:29:15.735 "num_base_bdevs_operational": 1, 00:29:15.735 "base_bdevs_list": [ 00:29:15.735 { 00:29:15.735 "name": null, 00:29:15.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.735 "is_configured": false, 00:29:15.735 "data_offset": 0, 00:29:15.735 "data_size": 63488 00:29:15.735 }, 00:29:15.735 { 00:29:15.735 "name": "BaseBdev2", 00:29:15.735 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:15.735 "is_configured": true, 00:29:15.735 "data_offset": 2048, 00:29:15.735 "data_size": 63488 00:29:15.735 } 00:29:15.735 ] 00:29:15.735 }' 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:15.735 13:40:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:16.301 13:40:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:16.301 13:40:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:16.301 13:40:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:16.301 [2024-10-28 13:40:30.299539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.301 [2024-10-28 13:40:30.299620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.301 [2024-10-28 13:40:30.299651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:16.301 [2024-10-28 13:40:30.299670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.301 [2024-10-28 13:40:30.300252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.301 [2024-10-28 13:40:30.300300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.301 [2024-10-28 13:40:30.300412] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:16.301 [2024-10-28 13:40:30.300441] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:16.301 [2024-10-28 13:40:30.300455] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:16.301 [2024-10-28 13:40:30.300493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:16.301 [2024-10-28 13:40:30.307003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:29:16.301 spare 00:29:16.301 13:40:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.301 13:40:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:16.301 [2024-10-28 13:40:30.309555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:17.234 "name": "raid_bdev1", 00:29:17.234 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:17.234 "strip_size_kb": 0, 00:29:17.234 "state": "online", 00:29:17.234 
"raid_level": "raid1", 00:29:17.234 "superblock": true, 00:29:17.234 "num_base_bdevs": 2, 00:29:17.234 "num_base_bdevs_discovered": 2, 00:29:17.234 "num_base_bdevs_operational": 2, 00:29:17.234 "process": { 00:29:17.234 "type": "rebuild", 00:29:17.234 "target": "spare", 00:29:17.234 "progress": { 00:29:17.234 "blocks": 20480, 00:29:17.234 "percent": 32 00:29:17.234 } 00:29:17.234 }, 00:29:17.234 "base_bdevs_list": [ 00:29:17.234 { 00:29:17.234 "name": "spare", 00:29:17.234 "uuid": "af73e9e6-0fef-5771-9a0d-3824e1957f98", 00:29:17.234 "is_configured": true, 00:29:17.234 "data_offset": 2048, 00:29:17.234 "data_size": 63488 00:29:17.234 }, 00:29:17.234 { 00:29:17.234 "name": "BaseBdev2", 00:29:17.234 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:17.234 "is_configured": true, 00:29:17.234 "data_offset": 2048, 00:29:17.234 "data_size": 63488 00:29:17.234 } 00:29:17.234 ] 00:29:17.234 }' 00:29:17.234 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:17.492 [2024-10-28 13:40:31.479358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:17.492 [2024-10-28 13:40:31.517800] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:17.492 [2024-10-28 13:40:31.517874] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.492 [2024-10-28 13:40:31.517902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:17.492 [2024-10-28 13:40:31.517914] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:17.492 13:40:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.492 "name": "raid_bdev1", 00:29:17.492 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:17.492 "strip_size_kb": 0, 00:29:17.492 "state": "online", 00:29:17.492 "raid_level": "raid1", 00:29:17.492 "superblock": true, 00:29:17.492 "num_base_bdevs": 2, 00:29:17.492 "num_base_bdevs_discovered": 1, 00:29:17.492 "num_base_bdevs_operational": 1, 00:29:17.492 "base_bdevs_list": [ 00:29:17.492 { 00:29:17.492 "name": null, 00:29:17.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.492 "is_configured": false, 00:29:17.492 "data_offset": 0, 00:29:17.492 "data_size": 63488 00:29:17.492 }, 00:29:17.492 { 00:29:17.492 "name": "BaseBdev2", 00:29:17.492 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:17.492 "is_configured": true, 00:29:17.492 "data_offset": 2048, 00:29:17.492 "data_size": 63488 00:29:17.492 } 00:29:17.492 ] 00:29:17.492 }' 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.492 13:40:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:18.058 "name": "raid_bdev1", 00:29:18.058 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:18.058 "strip_size_kb": 0, 00:29:18.058 "state": "online", 00:29:18.058 "raid_level": "raid1", 00:29:18.058 "superblock": true, 00:29:18.058 "num_base_bdevs": 2, 00:29:18.058 "num_base_bdevs_discovered": 1, 00:29:18.058 "num_base_bdevs_operational": 1, 00:29:18.058 "base_bdevs_list": [ 00:29:18.058 { 00:29:18.058 "name": null, 00:29:18.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.058 "is_configured": false, 00:29:18.058 "data_offset": 0, 00:29:18.058 "data_size": 63488 00:29:18.058 }, 00:29:18.058 { 00:29:18.058 "name": "BaseBdev2", 00:29:18.058 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:18.058 "is_configured": true, 00:29:18.058 "data_offset": 2048, 00:29:18.058 "data_size": 63488 00:29:18.058 } 00:29:18.058 ] 00:29:18.058 }' 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:18.058 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:18.316 [2024-10-28 13:40:32.236966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:18.316 [2024-10-28 13:40:32.237184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:18.316 [2024-10-28 13:40:32.237230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:18.316 [2024-10-28 13:40:32.237246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:18.316 [2024-10-28 13:40:32.237789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:18.316 [2024-10-28 13:40:32.237820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:18.316 [2024-10-28 13:40:32.237925] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:18.316 [2024-10-28 13:40:32.237945] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:18.316 [2024-10-28 13:40:32.237968] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:18.316 [2024-10-28 13:40:32.237984] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:18.316 BaseBdev1 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:29:18.316 13:40:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:19.252 "name": "raid_bdev1", 00:29:19.252 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:19.252 "strip_size_kb": 0, 
00:29:19.252 "state": "online", 00:29:19.252 "raid_level": "raid1", 00:29:19.252 "superblock": true, 00:29:19.252 "num_base_bdevs": 2, 00:29:19.252 "num_base_bdevs_discovered": 1, 00:29:19.252 "num_base_bdevs_operational": 1, 00:29:19.252 "base_bdevs_list": [ 00:29:19.252 { 00:29:19.252 "name": null, 00:29:19.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.252 "is_configured": false, 00:29:19.252 "data_offset": 0, 00:29:19.252 "data_size": 63488 00:29:19.252 }, 00:29:19.252 { 00:29:19.252 "name": "BaseBdev2", 00:29:19.252 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:19.252 "is_configured": true, 00:29:19.252 "data_offset": 2048, 00:29:19.252 "data_size": 63488 00:29:19.252 } 00:29:19.252 ] 00:29:19.252 }' 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:19.252 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:19.819 "name": "raid_bdev1", 00:29:19.819 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:19.819 "strip_size_kb": 0, 00:29:19.819 "state": "online", 00:29:19.819 "raid_level": "raid1", 00:29:19.819 "superblock": true, 00:29:19.819 "num_base_bdevs": 2, 00:29:19.819 "num_base_bdevs_discovered": 1, 00:29:19.819 "num_base_bdevs_operational": 1, 00:29:19.819 "base_bdevs_list": [ 00:29:19.819 { 00:29:19.819 "name": null, 00:29:19.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.819 "is_configured": false, 00:29:19.819 "data_offset": 0, 00:29:19.819 "data_size": 63488 00:29:19.819 }, 00:29:19.819 { 00:29:19.819 "name": "BaseBdev2", 00:29:19.819 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:19.819 "is_configured": true, 00:29:19.819 "data_offset": 2048, 00:29:19.819 "data_size": 63488 00:29:19.819 } 00:29:19.819 ] 00:29:19.819 }' 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:19.819 13:40:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.819 [2024-10-28 13:40:33.953577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:19.819 [2024-10-28 13:40:33.953972] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:19.819 [2024-10-28 13:40:33.954128] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:19.819 request: 00:29:19.819 { 00:29:19.819 "base_bdev": "BaseBdev1", 00:29:19.819 "raid_bdev": "raid_bdev1", 00:29:19.819 "method": "bdev_raid_add_base_bdev", 00:29:19.819 "req_id": 1 00:29:19.819 } 00:29:19.819 Got JSON-RPC error response 00:29:19.819 response: 00:29:19.819 { 00:29:19.819 "code": -22, 00:29:19.819 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:19.819 } 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:19.819 13:40:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.195 13:40:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.195 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:21.195 "name": "raid_bdev1", 00:29:21.195 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 
00:29:21.195 "strip_size_kb": 0, 00:29:21.195 "state": "online", 00:29:21.195 "raid_level": "raid1", 00:29:21.195 "superblock": true, 00:29:21.195 "num_base_bdevs": 2, 00:29:21.195 "num_base_bdevs_discovered": 1, 00:29:21.195 "num_base_bdevs_operational": 1, 00:29:21.195 "base_bdevs_list": [ 00:29:21.195 { 00:29:21.195 "name": null, 00:29:21.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.195 "is_configured": false, 00:29:21.195 "data_offset": 0, 00:29:21.195 "data_size": 63488 00:29:21.195 }, 00:29:21.195 { 00:29:21.195 "name": "BaseBdev2", 00:29:21.195 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:21.195 "is_configured": true, 00:29:21.195 "data_offset": 2048, 00:29:21.195 "data_size": 63488 00:29:21.195 } 00:29:21.195 ] 00:29:21.195 }' 00:29:21.195 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:21.195 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.455 13:40:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:21.455 "name": "raid_bdev1", 00:29:21.455 "uuid": "0ed299ba-ad86-445b-9650-1586fb99fb90", 00:29:21.455 "strip_size_kb": 0, 00:29:21.455 "state": "online", 00:29:21.455 "raid_level": "raid1", 00:29:21.455 "superblock": true, 00:29:21.455 "num_base_bdevs": 2, 00:29:21.455 "num_base_bdevs_discovered": 1, 00:29:21.455 "num_base_bdevs_operational": 1, 00:29:21.455 "base_bdevs_list": [ 00:29:21.455 { 00:29:21.455 "name": null, 00:29:21.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.455 "is_configured": false, 00:29:21.455 "data_offset": 0, 00:29:21.455 "data_size": 63488 00:29:21.455 }, 00:29:21.455 { 00:29:21.455 "name": "BaseBdev2", 00:29:21.455 "uuid": "f9167d99-9831-5569-b09b-f785777a78f7", 00:29:21.455 "is_configured": true, 00:29:21.455 "data_offset": 2048, 00:29:21.455 "data_size": 63488 00:29:21.455 } 00:29:21.455 ] 00:29:21.455 }' 00:29:21.455 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88428 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88428 ']' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88428 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88428 00:29:21.714 killing process with pid 88428 00:29:21.714 Received shutdown signal, test time was about 60.000000 seconds 00:29:21.714 00:29:21.714 Latency(us) 00:29:21.714 [2024-10-28T13:40:35.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.714 [2024-10-28T13:40:35.874Z] =================================================================================================================== 00:29:21.714 [2024-10-28T13:40:35.874Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88428' 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88428 00:29:21.714 [2024-10-28 13:40:35.709090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:21.714 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88428 00:29:21.714 [2024-10-28 13:40:35.709281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:21.714 [2024-10-28 13:40:35.709352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:21.714 [2024-10-28 13:40:35.709372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:21.714 [2024-10-28 13:40:35.740227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:21.973 13:40:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:29:21.973 00:29:21.973 real 0m25.597s 
00:29:21.973 user 0m32.207s 00:29:21.973 sys 0m3.846s 00:29:21.973 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:21.973 ************************************ 00:29:21.973 END TEST raid_rebuild_test_sb 00:29:21.973 ************************************ 00:29:21.973 13:40:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.973 13:40:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:29:21.973 13:40:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:21.973 13:40:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:21.973 13:40:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:21.973 ************************************ 00:29:21.973 START TEST raid_rebuild_test_io 00:29:21.973 ************************************ 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:21.973 
13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89179 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89179 00:29:21.973 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89179 ']' 00:29:21.974 13:40:36 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.974 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.974 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.974 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:21.974 13:40:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.232 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:22.232 Zero copy mechanism will not be used. 00:29:22.232 [2024-10-28 13:40:36.159587] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:29:22.232 [2024-10-28 13:40:36.159802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89179 ] 00:29:22.232 [2024-10-28 13:40:36.314202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:22.232 [2024-10-28 13:40:36.346741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.490 [2024-10-28 13:40:36.393648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.490 [2024-10-28 13:40:36.454299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:22.490 [2024-10-28 13:40:36.454344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.057 BaseBdev1_malloc 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.057 [2024-10-28 13:40:37.133536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:23.057 [2024-10-28 13:40:37.133613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.057 [2024-10-28 13:40:37.133654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:23.057 [2024-10-28 
13:40:37.133683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.057 [2024-10-28 13:40:37.136801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.057 [2024-10-28 13:40:37.136852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:23.057 BaseBdev1 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.057 BaseBdev2_malloc 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.057 [2024-10-28 13:40:37.157605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:23.057 [2024-10-28 13:40:37.157839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.057 [2024-10-28 13:40:37.157879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:23.057 [2024-10-28 13:40:37.157907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.057 [2024-10-28 13:40:37.160947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:29:23.057 [2024-10-28 13:40:37.161124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:23.057 BaseBdev2 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:23.057 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.058 spare_malloc 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.058 spare_delay 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.058 [2024-10-28 13:40:37.193986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:23.058 [2024-10-28 13:40:37.194058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.058 [2024-10-28 13:40:37.194088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:23.058 [2024-10-28 13:40:37.194111] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.058 [2024-10-28 13:40:37.197094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.058 [2024-10-28 13:40:37.197279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:23.058 spare 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.058 [2024-10-28 13:40:37.202109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:23.058 [2024-10-28 13:40:37.204787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.058 [2024-10-28 13:40:37.205055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:29:23.058 [2024-10-28 13:40:37.205081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:23.058 [2024-10-28 13:40:37.205450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:23.058 [2024-10-28 13:40:37.205678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:29:23.058 [2024-10-28 13:40:37.205697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:29:23.058 [2024-10-28 13:40:37.205890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.058 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.317 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.317 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.317 "name": "raid_bdev1", 00:29:23.317 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:23.317 "strip_size_kb": 0, 00:29:23.317 "state": "online", 00:29:23.317 "raid_level": "raid1", 00:29:23.317 "superblock": false, 00:29:23.317 "num_base_bdevs": 2, 00:29:23.317 
"num_base_bdevs_discovered": 2, 00:29:23.317 "num_base_bdevs_operational": 2, 00:29:23.317 "base_bdevs_list": [ 00:29:23.317 { 00:29:23.317 "name": "BaseBdev1", 00:29:23.317 "uuid": "b23bc010-037e-5b93-87dd-e2fdf6f07645", 00:29:23.317 "is_configured": true, 00:29:23.317 "data_offset": 0, 00:29:23.317 "data_size": 65536 00:29:23.317 }, 00:29:23.317 { 00:29:23.317 "name": "BaseBdev2", 00:29:23.317 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:23.317 "is_configured": true, 00:29:23.317 "data_offset": 0, 00:29:23.317 "data_size": 65536 00:29:23.317 } 00:29:23.317 ] 00:29:23.317 }' 00:29:23.317 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.317 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.575 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:23.575 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:23.575 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.575 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.834 [2024-10-28 13:40:37.734723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.834 [2024-10-28 13:40:37.826290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.834 "name": "raid_bdev1", 00:29:23.834 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:23.834 "strip_size_kb": 0, 00:29:23.834 "state": "online", 00:29:23.834 "raid_level": "raid1", 00:29:23.834 "superblock": false, 00:29:23.834 "num_base_bdevs": 2, 00:29:23.834 "num_base_bdevs_discovered": 1, 00:29:23.834 "num_base_bdevs_operational": 1, 00:29:23.834 "base_bdevs_list": [ 00:29:23.834 { 00:29:23.834 "name": null, 00:29:23.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.834 "is_configured": false, 00:29:23.834 "data_offset": 0, 00:29:23.834 "data_size": 65536 00:29:23.834 }, 00:29:23.834 { 00:29:23.834 "name": "BaseBdev2", 00:29:23.834 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:23.834 "is_configured": true, 00:29:23.834 "data_offset": 0, 00:29:23.834 "data_size": 65536 00:29:23.834 } 00:29:23.834 ] 00:29:23.834 }' 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.834 13:40:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:23.834 [2024-10-28 13:40:37.945043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:29:23.834 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:29:23.834 Zero copy mechanism will not be used. 00:29:23.834 Running I/O for 60 seconds... 00:29:24.411 13:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:24.411 13:40:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.411 13:40:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:24.411 [2024-10-28 13:40:38.373286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:24.411 13:40:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.411 13:40:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:24.411 [2024-10-28 13:40:38.449106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:24.411 [2024-10-28 13:40:38.451995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:24.690 [2024-10-28 13:40:38.594995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:24.690 [2024-10-28 13:40:38.738334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:24.690 [2024-10-28 13:40:38.738955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:24.947 160.00 IOPS, 480.00 MiB/s [2024-10-28T13:40:39.107Z] [2024-10-28 13:40:38.970011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:25.205 [2024-10-28 13:40:39.187292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:25.464 [2024-10-28 13:40:39.422769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:25.464 [2024-10-28 13:40:39.423707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:25.464 "name": "raid_bdev1", 00:29:25.464 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:25.464 "strip_size_kb": 0, 00:29:25.464 "state": "online", 00:29:25.464 "raid_level": "raid1", 00:29:25.464 "superblock": false, 00:29:25.464 "num_base_bdevs": 2, 00:29:25.464 "num_base_bdevs_discovered": 2, 00:29:25.464 "num_base_bdevs_operational": 2, 00:29:25.464 "process": { 00:29:25.464 "type": "rebuild", 00:29:25.464 "target": "spare", 00:29:25.464 "progress": { 00:29:25.464 "blocks": 14336, 00:29:25.464 "percent": 21 00:29:25.464 } 00:29:25.464 }, 
00:29:25.464 "base_bdevs_list": [ 00:29:25.464 { 00:29:25.464 "name": "spare", 00:29:25.464 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:25.464 "is_configured": true, 00:29:25.464 "data_offset": 0, 00:29:25.464 "data_size": 65536 00:29:25.464 }, 00:29:25.464 { 00:29:25.464 "name": "BaseBdev2", 00:29:25.464 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:25.464 "is_configured": true, 00:29:25.464 "data_offset": 0, 00:29:25.464 "data_size": 65536 00:29:25.464 } 00:29:25.464 ] 00:29:25.464 }' 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.464 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.464 [2024-10-28 13:40:39.598756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:25.721 [2024-10-28 13:40:39.644594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:25.721 [2024-10-28 13:40:39.678727] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:25.721 [2024-10-28 13:40:39.681222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:25.721 [2024-10-28 13:40:39.681372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:25.721 [2024-10-28 13:40:39.681432] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:25.721 [2024-10-28 13:40:39.713744] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:25.721 "name": "raid_bdev1", 00:29:25.721 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:25.721 "strip_size_kb": 0, 00:29:25.721 "state": "online", 00:29:25.721 "raid_level": "raid1", 00:29:25.721 "superblock": false, 00:29:25.721 "num_base_bdevs": 2, 00:29:25.721 "num_base_bdevs_discovered": 1, 00:29:25.721 "num_base_bdevs_operational": 1, 00:29:25.721 "base_bdevs_list": [ 00:29:25.721 { 00:29:25.721 "name": null, 00:29:25.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.721 "is_configured": false, 00:29:25.721 "data_offset": 0, 00:29:25.721 "data_size": 65536 00:29:25.721 }, 00:29:25.721 { 00:29:25.721 "name": "BaseBdev2", 00:29:25.721 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:25.721 "is_configured": true, 00:29:25.721 "data_offset": 0, 00:29:25.721 "data_size": 65536 00:29:25.721 } 00:29:25.721 ] 00:29:25.721 }' 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:25.721 13:40:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.238 145.00 IOPS, 435.00 MiB/s [2024-10-28T13:40:40.398Z] 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:26.238 "name": "raid_bdev1", 00:29:26.238 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:26.238 "strip_size_kb": 0, 00:29:26.238 "state": "online", 00:29:26.238 "raid_level": "raid1", 00:29:26.238 "superblock": false, 00:29:26.238 "num_base_bdevs": 2, 00:29:26.238 "num_base_bdevs_discovered": 1, 00:29:26.238 "num_base_bdevs_operational": 1, 00:29:26.238 "base_bdevs_list": [ 00:29:26.238 { 00:29:26.238 "name": null, 00:29:26.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.238 "is_configured": false, 00:29:26.238 "data_offset": 0, 00:29:26.238 "data_size": 65536 00:29:26.238 }, 00:29:26.238 { 00:29:26.238 "name": "BaseBdev2", 00:29:26.238 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:26.238 "is_configured": true, 00:29:26.238 "data_offset": 0, 00:29:26.238 "data_size": 65536 00:29:26.238 } 00:29:26.238 ] 00:29:26.238 }' 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:26.238 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:26.496 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:26.496 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:26.496 13:40:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.496 13:40:40 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:26.496 [2024-10-28 13:40:40.436229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:26.496 13:40:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.496 13:40:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:26.496 [2024-10-28 13:40:40.497008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:29:26.496 [2024-10-28 13:40:40.499683] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:26.496 [2024-10-28 13:40:40.625052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:26.496 [2024-10-28 13:40:40.625886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:26.754 [2024-10-28 13:40:40.844041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:26.754 [2024-10-28 13:40:40.844645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:27.013 148.00 IOPS, 444.00 MiB/s [2024-10-28T13:40:41.173Z] [2024-10-28 13:40:41.077657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:27.272 [2024-10-28 13:40:41.187124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:27.272 [2024-10-28 13:40:41.422659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:27.272 [2024-10-28 13:40:41.423549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:27.531 "name": "raid_bdev1", 00:29:27.531 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:27.531 "strip_size_kb": 0, 00:29:27.531 "state": "online", 00:29:27.531 "raid_level": "raid1", 00:29:27.531 "superblock": false, 00:29:27.531 "num_base_bdevs": 2, 00:29:27.531 "num_base_bdevs_discovered": 2, 00:29:27.531 "num_base_bdevs_operational": 2, 00:29:27.531 "process": { 00:29:27.531 "type": "rebuild", 00:29:27.531 "target": "spare", 00:29:27.531 "progress": { 00:29:27.531 "blocks": 14336, 00:29:27.531 "percent": 21 00:29:27.531 } 00:29:27.531 }, 00:29:27.531 "base_bdevs_list": [ 00:29:27.531 { 00:29:27.531 "name": "spare", 00:29:27.531 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:27.531 "is_configured": true, 00:29:27.531 "data_offset": 0, 00:29:27.531 
"data_size": 65536 00:29:27.531 }, 00:29:27.531 { 00:29:27.531 "name": "BaseBdev2", 00:29:27.531 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:27.531 "is_configured": true, 00:29:27.531 "data_offset": 0, 00:29:27.531 "data_size": 65536 00:29:27.531 } 00:29:27.531 ] 00:29:27.531 }' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:27.531 [2024-10-28 13:40:41.550217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:27.531 "name": "raid_bdev1", 00:29:27.531 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:27.531 "strip_size_kb": 0, 00:29:27.531 "state": "online", 00:29:27.531 "raid_level": "raid1", 00:29:27.531 "superblock": false, 00:29:27.531 "num_base_bdevs": 2, 00:29:27.531 "num_base_bdevs_discovered": 2, 00:29:27.531 "num_base_bdevs_operational": 2, 00:29:27.531 "process": { 00:29:27.531 "type": "rebuild", 00:29:27.531 "target": "spare", 00:29:27.531 "progress": { 00:29:27.531 "blocks": 16384, 00:29:27.531 "percent": 25 00:29:27.531 } 00:29:27.531 }, 00:29:27.531 "base_bdevs_list": [ 00:29:27.531 { 00:29:27.531 "name": "spare", 00:29:27.531 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:27.531 "is_configured": true, 00:29:27.531 "data_offset": 0, 00:29:27.531 "data_size": 65536 00:29:27.531 }, 00:29:27.531 { 00:29:27.531 "name": "BaseBdev2", 00:29:27.531 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:27.531 "is_configured": true, 00:29:27.531 "data_offset": 0, 00:29:27.531 "data_size": 65536 00:29:27.531 } 00:29:27.531 ] 00:29:27.531 }' 00:29:27.531 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:27.790 13:40:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:27.790 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:27.790 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:27.790 13:40:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:27.790 [2024-10-28 13:40:41.915827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:27.790 [2024-10-28 13:40:41.916465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:28.049 127.75 IOPS, 383.25 MiB/s [2024-10-28T13:40:42.210Z] [2024-10-28 13:40:42.136658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:28.308 [2024-10-28 13:40:42.342864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:28.565 [2024-10-28 13:40:42.679364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:28.823 [2024-10-28 13:40:42.788497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:28.823 "name": "raid_bdev1", 00:29:28.823 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:28.823 "strip_size_kb": 0, 00:29:28.823 "state": "online", 00:29:28.823 "raid_level": "raid1", 00:29:28.823 "superblock": false, 00:29:28.823 "num_base_bdevs": 2, 00:29:28.823 "num_base_bdevs_discovered": 2, 00:29:28.823 "num_base_bdevs_operational": 2, 00:29:28.823 "process": { 00:29:28.823 "type": "rebuild", 00:29:28.823 "target": "spare", 00:29:28.823 "progress": { 00:29:28.823 "blocks": 34816, 00:29:28.823 "percent": 53 00:29:28.823 } 00:29:28.823 }, 00:29:28.823 "base_bdevs_list": [ 00:29:28.823 { 00:29:28.823 "name": "spare", 00:29:28.823 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:28.823 "is_configured": true, 00:29:28.823 "data_offset": 0, 00:29:28.823 "data_size": 65536 00:29:28.823 }, 00:29:28.823 { 00:29:28.823 "name": "BaseBdev2", 00:29:28.823 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:28.823 "is_configured": true, 00:29:28.823 "data_offset": 0, 00:29:28.823 "data_size": 65536 00:29:28.823 } 00:29:28.823 ] 00:29:28.823 }' 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:28.823 13:40:42 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.823 13:40:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:29.080 113.60 IOPS, 340.80 MiB/s [2024-10-28T13:40:43.240Z] [2024-10-28 13:40:43.136368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:29.660 [2024-10-28 13:40:43.659677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:29.660 [2024-10-28 13:40:43.660028] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:29.929 101.67 IOPS, 305.00 MiB/s [2024-10-28T13:40:44.089Z] 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.929 13:40:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.929 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:29.929 "name": "raid_bdev1", 00:29:29.929 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:29.929 "strip_size_kb": 0, 00:29:29.929 "state": "online", 00:29:29.929 "raid_level": "raid1", 00:29:29.929 "superblock": false, 00:29:29.929 "num_base_bdevs": 2, 00:29:29.929 "num_base_bdevs_discovered": 2, 00:29:29.929 "num_base_bdevs_operational": 2, 00:29:29.929 "process": { 00:29:29.929 "type": "rebuild", 00:29:29.929 "target": "spare", 00:29:29.929 "progress": { 00:29:29.929 "blocks": 49152, 00:29:29.929 "percent": 75 00:29:29.929 } 00:29:29.929 }, 00:29:29.929 "base_bdevs_list": [ 00:29:29.929 { 00:29:29.929 "name": "spare", 00:29:29.929 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:29.929 "is_configured": true, 00:29:29.929 "data_offset": 0, 00:29:29.929 "data_size": 65536 00:29:29.929 }, 00:29:29.929 { 00:29:29.929 "name": "BaseBdev2", 00:29:29.929 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:29.929 "is_configured": true, 00:29:29.929 "data_offset": 0, 00:29:29.929 "data_size": 65536 00:29:29.929 } 00:29:29.929 ] 00:29:29.929 }' 00:29:29.929 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:29.929 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:29.929 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:30.187 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:30.187 13:40:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:30.187 [2024-10-28 
13:40:44.335636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:30.445 [2024-10-28 13:40:44.462000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:29:31.011 [2024-10-28 13:40:44.901421] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:31.011 93.00 IOPS, 279.00 MiB/s [2024-10-28T13:40:45.171Z] [2024-10-28 13:40:45.009039] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:31.011 [2024-10-28 13:40:45.011715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.011 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:31.011 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.011 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.012 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.270 13:40:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:31.270 "name": "raid_bdev1", 00:29:31.270 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:31.270 "strip_size_kb": 0, 00:29:31.270 "state": "online", 00:29:31.270 "raid_level": "raid1", 00:29:31.270 "superblock": false, 00:29:31.270 "num_base_bdevs": 2, 00:29:31.270 "num_base_bdevs_discovered": 2, 00:29:31.270 "num_base_bdevs_operational": 2, 00:29:31.270 "base_bdevs_list": [ 00:29:31.270 { 00:29:31.270 "name": "spare", 00:29:31.270 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:31.270 "is_configured": true, 00:29:31.270 "data_offset": 0, 00:29:31.270 "data_size": 65536 00:29:31.270 }, 00:29:31.270 { 00:29:31.270 "name": "BaseBdev2", 00:29:31.270 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:31.270 "is_configured": true, 00:29:31.270 "data_offset": 0, 00:29:31.270 "data_size": 65536 00:29:31.270 } 00:29:31.270 ] 00:29:31.270 }' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:31.270 "name": "raid_bdev1", 00:29:31.270 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:31.270 "strip_size_kb": 0, 00:29:31.270 "state": "online", 00:29:31.270 "raid_level": "raid1", 00:29:31.270 "superblock": false, 00:29:31.270 "num_base_bdevs": 2, 00:29:31.270 "num_base_bdevs_discovered": 2, 00:29:31.270 "num_base_bdevs_operational": 2, 00:29:31.270 "base_bdevs_list": [ 00:29:31.270 { 00:29:31.270 "name": "spare", 00:29:31.270 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:31.270 "is_configured": true, 00:29:31.270 "data_offset": 0, 00:29:31.270 "data_size": 65536 00:29:31.270 }, 00:29:31.270 { 00:29:31.270 "name": "BaseBdev2", 00:29:31.270 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:31.270 "is_configured": true, 00:29:31.270 "data_offset": 0, 00:29:31.270 "data_size": 65536 00:29:31.270 } 00:29:31.270 ] 00:29:31.270 }' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:31.270 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:31.529 "name": "raid_bdev1", 00:29:31.529 "uuid": "0d6f1322-6de2-451a-9fb6-3b2d47700724", 00:29:31.529 "strip_size_kb": 0, 00:29:31.529 "state": "online", 00:29:31.529 "raid_level": "raid1", 00:29:31.529 "superblock": false, 
00:29:31.529 "num_base_bdevs": 2, 00:29:31.529 "num_base_bdevs_discovered": 2, 00:29:31.529 "num_base_bdevs_operational": 2, 00:29:31.529 "base_bdevs_list": [ 00:29:31.529 { 00:29:31.529 "name": "spare", 00:29:31.529 "uuid": "198de6a7-fa7e-5c90-884e-04804566178f", 00:29:31.529 "is_configured": true, 00:29:31.529 "data_offset": 0, 00:29:31.529 "data_size": 65536 00:29:31.529 }, 00:29:31.529 { 00:29:31.529 "name": "BaseBdev2", 00:29:31.529 "uuid": "cffd7720-ee1c-5fb5-b9cf-8421ada2cdee", 00:29:31.529 "is_configured": true, 00:29:31.529 "data_offset": 0, 00:29:31.529 "data_size": 65536 00:29:31.529 } 00:29:31.529 ] 00:29:31.529 }' 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:31.529 13:40:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.096 84.00 IOPS, 252.00 MiB/s [2024-10-28T13:40:46.256Z] 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.096 [2024-10-28 13:40:46.019298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:32.096 [2024-10-28 13:40:46.019332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:32.096 00:29:32.096 Latency(us) 00:29:32.096 [2024-10-28T13:40:46.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.096 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:32.096 raid_bdev1 : 8.17 82.89 248.67 0.00 0.00 15062.29 255.07 111530.36 00:29:32.096 [2024-10-28T13:40:46.256Z] =================================================================================================================== 00:29:32.096 [2024-10-28T13:40:46.256Z] Total : 
82.89 248.67 0.00 0.00 15062.29 255.07 111530.36 00:29:32.096 { 00:29:32.096 "results": [ 00:29:32.096 { 00:29:32.096 "job": "raid_bdev1", 00:29:32.096 "core_mask": "0x1", 00:29:32.096 "workload": "randrw", 00:29:32.096 "percentage": 50, 00:29:32.096 "status": "finished", 00:29:32.096 "queue_depth": 2, 00:29:32.096 "io_size": 3145728, 00:29:32.096 "runtime": 8.167373, 00:29:32.096 "iops": 82.89078997616492, 00:29:32.096 "mibps": 248.6723699284948, 00:29:32.096 "io_failed": 0, 00:29:32.096 "io_timeout": 0, 00:29:32.096 "avg_latency_us": 15062.289222505708, 00:29:32.096 "min_latency_us": 255.0690909090909, 00:29:32.096 "max_latency_us": 111530.35636363637 00:29:32.096 } 00:29:32.096 ], 00:29:32.096 "core_count": 1 00:29:32.096 } 00:29:32.096 [2024-10-28 13:40:46.120403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.096 [2024-10-28 13:40:46.120472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:32.096 [2024-10-28 13:40:46.120588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:32.096 [2024-10-28 13:40:46.120605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:32.096 
13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:32.096 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:32.354 /dev/nbd0 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:32.354 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:32.354 13:40:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:32.613 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:32.614 1+0 records in 00:29:32.614 1+0 records out 00:29:32.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357209 s, 11.5 MB/s 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:32.614 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:32.872 /dev/nbd1 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:32.872 1+0 records in 00:29:32.872 1+0 records out 00:29:32.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414341 s, 9.9 MB/s 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:32.872 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.873 13:40:46 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:33.131 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:33.389 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89179 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89179 ']' 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89179 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89179 00:29:33.390 killing process with pid 89179 00:29:33.390 Received shutdown signal, test time was about 9.564167 seconds 00:29:33.390 00:29:33.390 Latency(us) 00:29:33.390 [2024-10-28T13:40:47.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.390 [2024-10-28T13:40:47.550Z] =================================================================================================================== 00:29:33.390 [2024-10-28T13:40:47.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 89179' 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89179 00:29:33.390 [2024-10-28 13:40:47.512001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:33.390 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89179 00:29:33.390 [2024-10-28 13:40:47.540066] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:33.649 ************************************ 00:29:33.649 END TEST raid_rebuild_test_io 00:29:33.649 ************************************ 00:29:33.649 13:40:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:33.649 00:29:33.649 real 0m11.741s 00:29:33.649 user 0m15.827s 00:29:33.649 sys 0m1.382s 00:29:33.649 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:33.649 13:40:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.907 13:40:47 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:29:33.907 13:40:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:33.907 13:40:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.907 13:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:33.907 ************************************ 00:29:33.907 START TEST raid_rebuild_test_sb_io 00:29:33.907 ************************************ 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:33.907 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89554 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89554 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89554 ']' 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:33.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.908 13:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:33.908 [2024-10-28 13:40:47.955640] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:29:33.908 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:33.908 Zero copy mechanism will not be used. 
00:29:33.908 [2024-10-28 13:40:47.956010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89554 ] 00:29:34.166 [2024-10-28 13:40:48.109780] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:34.166 [2024-10-28 13:40:48.141994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.166 [2024-10-28 13:40:48.191894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.166 [2024-10-28 13:40:48.253142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:34.166 [2024-10-28 13:40:48.253483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 BaseBdev1_malloc 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 [2024-10-28 13:40:48.979844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:35.101 [2024-10-28 13:40:48.979930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.101 [2024-10-28 13:40:48.979973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:35.101 [2024-10-28 13:40:48.979997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.101 [2024-10-28 13:40:48.983069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.101 [2024-10-28 13:40:48.983135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:35.101 BaseBdev1 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 BaseBdev2_malloc 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 [2024-10-28 13:40:49.008117] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:35.101 [2024-10-28 13:40:49.008376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.101 [2024-10-28 13:40:49.008539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:35.101 [2024-10-28 13:40:49.008715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.101 [2024-10-28 13:40:49.011746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.101 [2024-10-28 13:40:49.011801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:35.101 BaseBdev2 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 spare_malloc 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 spare_delay 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:35.101 13:40:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.101 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.101 [2024-10-28 13:40:49.048763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:35.101 [2024-10-28 13:40:49.048999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.101 [2024-10-28 13:40:49.049040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:35.102 [2024-10-28 13:40:49.049062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.102 [2024-10-28 13:40:49.052079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.102 [2024-10-28 13:40:49.052363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:35.102 spare 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.102 [2024-10-28 13:40:49.057009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:35.102 [2024-10-28 13:40:49.059754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:35.102 [2024-10-28 13:40:49.060014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:29:35.102 [2024-10-28 13:40:49.060043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:35.102 [2024-10-28 13:40:49.060457] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:35.102 [2024-10-28 13:40:49.060690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:29:35.102 [2024-10-28 13:40:49.060707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:29:35.102 [2024-10-28 13:40:49.060956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.102 "name": "raid_bdev1", 00:29:35.102 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:35.102 "strip_size_kb": 0, 00:29:35.102 "state": "online", 00:29:35.102 "raid_level": "raid1", 00:29:35.102 "superblock": true, 00:29:35.102 "num_base_bdevs": 2, 00:29:35.102 "num_base_bdevs_discovered": 2, 00:29:35.102 "num_base_bdevs_operational": 2, 00:29:35.102 "base_bdevs_list": [ 00:29:35.102 { 00:29:35.102 "name": "BaseBdev1", 00:29:35.102 "uuid": "7e86fbc4-d82e-5565-83b4-6fda99ea2959", 00:29:35.102 "is_configured": true, 00:29:35.102 "data_offset": 2048, 00:29:35.102 "data_size": 63488 00:29:35.102 }, 00:29:35.102 { 00:29:35.102 "name": "BaseBdev2", 00:29:35.102 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:35.102 "is_configured": true, 00:29:35.102 "data_offset": 2048, 00:29:35.102 "data_size": 63488 00:29:35.102 } 00:29:35.102 ] 00:29:35.102 }' 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.102 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:35.692 [2024-10-28 13:40:49.609634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.692 [2024-10-28 13:40:49.713241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.692 
13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.692 "name": "raid_bdev1", 00:29:35.692 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:35.692 "strip_size_kb": 0, 00:29:35.692 "state": "online", 00:29:35.692 "raid_level": "raid1", 00:29:35.692 "superblock": true, 00:29:35.692 "num_base_bdevs": 2, 00:29:35.692 "num_base_bdevs_discovered": 1, 00:29:35.692 "num_base_bdevs_operational": 1, 00:29:35.692 "base_bdevs_list": [ 00:29:35.692 { 00:29:35.692 "name": null, 00:29:35.692 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:35.692 "is_configured": false, 00:29:35.692 "data_offset": 0, 00:29:35.692 "data_size": 63488 00:29:35.692 }, 00:29:35.692 { 00:29:35.692 "name": "BaseBdev2", 00:29:35.692 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:35.692 "is_configured": true, 00:29:35.692 "data_offset": 2048, 00:29:35.692 "data_size": 63488 00:29:35.692 } 00:29:35.692 ] 00:29:35.692 }' 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.692 13:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:35.952 [2024-10-28 13:40:49.840067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:29:35.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:35.952 Zero copy mechanism will not be used. 00:29:35.952 Running I/O for 60 seconds... 00:29:36.210 13:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:36.210 13:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.210 13:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.210 [2024-10-28 13:40:50.281244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:36.210 13:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.210 13:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:36.210 [2024-10-28 13:40:50.358648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:36.210 [2024-10-28 13:40:50.361506] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:36.469 [2024-10-28 13:40:50.496066] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:29:36.469 [2024-10-28 13:40:50.496833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:36.728 [2024-10-28 13:40:50.717306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:36.728 [2024-10-28 13:40:50.717979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:36.986 159.00 IOPS, 477.00 MiB/s [2024-10-28T13:40:51.146Z] [2024-10-28 13:40:51.059991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:36.986 [2024-10-28 13:40:51.060880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:37.246 [2024-10-28 13:40:51.173249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:37.246 [2024-10-28 13:40:51.173881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:37.246 "name": "raid_bdev1", 00:29:37.246 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:37.246 "strip_size_kb": 0, 00:29:37.246 "state": "online", 00:29:37.246 "raid_level": "raid1", 00:29:37.246 "superblock": true, 00:29:37.246 "num_base_bdevs": 2, 00:29:37.246 "num_base_bdevs_discovered": 2, 00:29:37.246 "num_base_bdevs_operational": 2, 00:29:37.246 "process": { 00:29:37.246 "type": "rebuild", 00:29:37.246 "target": "spare", 00:29:37.246 "progress": { 00:29:37.246 "blocks": 10240, 00:29:37.246 "percent": 16 00:29:37.246 } 00:29:37.246 }, 00:29:37.246 "base_bdevs_list": [ 00:29:37.246 { 00:29:37.246 "name": "spare", 00:29:37.246 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:37.246 "is_configured": true, 00:29:37.246 "data_offset": 2048, 00:29:37.246 "data_size": 63488 00:29:37.246 }, 00:29:37.246 { 00:29:37.246 "name": "BaseBdev2", 00:29:37.246 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:37.246 "is_configured": true, 00:29:37.246 "data_offset": 2048, 00:29:37.246 "data_size": 63488 00:29:37.246 } 00:29:37.246 ] 00:29:37.246 }' 00:29:37.246 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.505 
13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.505 [2024-10-28 13:40:51.504977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:37.505 [2024-10-28 13:40:51.513618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:37.505 [2024-10-28 13:40:51.514283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:37.505 [2024-10-28 13:40:51.515249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:37.505 [2024-10-28 13:40:51.531257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:37.505 [2024-10-28 13:40:51.531295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:37.505 [2024-10-28 13:40:51.531313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:37.505 [2024-10-28 13:40:51.547007] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.505 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.505 "name": "raid_bdev1", 00:29:37.505 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:37.505 "strip_size_kb": 0, 00:29:37.505 "state": "online", 00:29:37.505 "raid_level": "raid1", 00:29:37.506 "superblock": true, 00:29:37.506 "num_base_bdevs": 2, 00:29:37.506 "num_base_bdevs_discovered": 1, 00:29:37.506 "num_base_bdevs_operational": 1, 00:29:37.506 "base_bdevs_list": [ 00:29:37.506 { 00:29:37.506 "name": null, 00:29:37.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.506 "is_configured": false, 00:29:37.506 "data_offset": 0, 00:29:37.506 "data_size": 63488 00:29:37.506 }, 00:29:37.506 { 00:29:37.506 "name": "BaseBdev2", 
00:29:37.506 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:37.506 "is_configured": true, 00:29:37.506 "data_offset": 2048, 00:29:37.506 "data_size": 63488 00:29:37.506 } 00:29:37.506 ] 00:29:37.506 }' 00:29:37.506 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.506 13:40:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.023 143.00 IOPS, 429.00 MiB/s [2024-10-28T13:40:52.183Z] 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:38.024 "name": "raid_bdev1", 00:29:38.024 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:38.024 "strip_size_kb": 0, 00:29:38.024 "state": "online", 00:29:38.024 "raid_level": "raid1", 00:29:38.024 "superblock": true, 00:29:38.024 "num_base_bdevs": 2, 00:29:38.024 
"num_base_bdevs_discovered": 1, 00:29:38.024 "num_base_bdevs_operational": 1, 00:29:38.024 "base_bdevs_list": [ 00:29:38.024 { 00:29:38.024 "name": null, 00:29:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.024 "is_configured": false, 00:29:38.024 "data_offset": 0, 00:29:38.024 "data_size": 63488 00:29:38.024 }, 00:29:38.024 { 00:29:38.024 "name": "BaseBdev2", 00:29:38.024 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:38.024 "is_configured": true, 00:29:38.024 "data_offset": 2048, 00:29:38.024 "data_size": 63488 00:29:38.024 } 00:29:38.024 ] 00:29:38.024 }' 00:29:38.024 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.283 [2024-10-28 13:40:52.253548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.283 13:40:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:38.283 [2024-10-28 13:40:52.291540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:29:38.283 [2024-10-28 13:40:52.294377] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:38.283 
[2024-10-28 13:40:52.406633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:38.283 [2024-10-28 13:40:52.407041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:38.541 [2024-10-28 13:40:52.539881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:38.541 [2024-10-28 13:40:52.540460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:38.799 150.00 IOPS, 450.00 MiB/s [2024-10-28T13:40:52.959Z] [2024-10-28 13:40:52.876670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:39.057 [2024-10-28 13:40:53.029034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:39.316 "name": "raid_bdev1", 00:29:39.316 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:39.316 "strip_size_kb": 0, 00:29:39.316 "state": "online", 00:29:39.316 "raid_level": "raid1", 00:29:39.316 "superblock": true, 00:29:39.316 "num_base_bdevs": 2, 00:29:39.316 "num_base_bdevs_discovered": 2, 00:29:39.316 "num_base_bdevs_operational": 2, 00:29:39.316 "process": { 00:29:39.316 "type": "rebuild", 00:29:39.316 "target": "spare", 00:29:39.316 "progress": { 00:29:39.316 "blocks": 12288, 00:29:39.316 "percent": 19 00:29:39.316 } 00:29:39.316 }, 00:29:39.316 "base_bdevs_list": [ 00:29:39.316 { 00:29:39.316 "name": "spare", 00:29:39.316 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:39.316 "is_configured": true, 00:29:39.316 "data_offset": 2048, 00:29:39.316 "data_size": 63488 00:29:39.316 }, 00:29:39.316 { 00:29:39.316 "name": "BaseBdev2", 00:29:39.316 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:39.316 "is_configured": true, 00:29:39.316 "data_offset": 2048, 00:29:39.316 "data_size": 63488 00:29:39.316 } 00:29:39.316 ] 00:29:39.316 }' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.316 [2024-10-28 13:40:53.401643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.316 13:40:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:29:39.316 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=387 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:39.316 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.575 
13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:39.575 "name": "raid_bdev1", 00:29:39.575 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:39.575 "strip_size_kb": 0, 00:29:39.575 "state": "online", 00:29:39.575 "raid_level": "raid1", 00:29:39.575 "superblock": true, 00:29:39.575 "num_base_bdevs": 2, 00:29:39.575 "num_base_bdevs_discovered": 2, 00:29:39.575 "num_base_bdevs_operational": 2, 00:29:39.575 "process": { 00:29:39.575 "type": "rebuild", 00:29:39.575 "target": "spare", 00:29:39.575 "progress": { 00:29:39.575 "blocks": 14336, 00:29:39.575 "percent": 22 00:29:39.575 } 00:29:39.575 }, 00:29:39.575 "base_bdevs_list": [ 00:29:39.575 { 00:29:39.575 "name": "spare", 00:29:39.575 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:39.575 "is_configured": true, 00:29:39.575 "data_offset": 2048, 00:29:39.575 "data_size": 63488 00:29:39.575 }, 00:29:39.575 { 00:29:39.575 "name": "BaseBdev2", 00:29:39.575 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:39.575 "is_configured": true, 00:29:39.575 "data_offset": 2048, 00:29:39.575 "data_size": 63488 00:29:39.575 } 00:29:39.575 ] 00:29:39.575 }' 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:39.575 [2024-10-28 13:40:53.520288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:39.575 13:40:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:39.833 [2024-10-28 13:40:53.791396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:39.833 140.00 IOPS, 420.00 MiB/s [2024-10-28T13:40:53.993Z] [2024-10-28 13:40:53.926078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:40.399 [2024-10-28 13:40:54.512502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:40.399 [2024-10-28 13:40:54.513119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:40.658 "name": "raid_bdev1", 
00:29:40.658 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:40.658 "strip_size_kb": 0, 00:29:40.658 "state": "online", 00:29:40.658 "raid_level": "raid1", 00:29:40.658 "superblock": true, 00:29:40.658 "num_base_bdevs": 2, 00:29:40.658 "num_base_bdevs_discovered": 2, 00:29:40.658 "num_base_bdevs_operational": 2, 00:29:40.658 "process": { 00:29:40.658 "type": "rebuild", 00:29:40.658 "target": "spare", 00:29:40.658 "progress": { 00:29:40.658 "blocks": 32768, 00:29:40.658 "percent": 51 00:29:40.658 } 00:29:40.658 }, 00:29:40.658 "base_bdevs_list": [ 00:29:40.658 { 00:29:40.658 "name": "spare", 00:29:40.658 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 2048, 00:29:40.658 "data_size": 63488 00:29:40.658 }, 00:29:40.658 { 00:29:40.658 "name": "BaseBdev2", 00:29:40.658 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:40.658 "is_configured": true, 00:29:40.658 "data_offset": 2048, 00:29:40.658 "data_size": 63488 00:29:40.658 } 00:29:40.658 ] 00:29:40.658 }' 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:40.658 [2024-10-28 13:40:54.741570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:40.658 [2024-10-28 13:40:54.741876] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.658 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:40.917 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.917 13:40:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:40.917 120.80 IOPS, 362.40 MiB/s [2024-10-28T13:40:55.077Z] 
[2024-10-28 13:40:55.069786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:40.917 [2024-10-28 13:40:55.070322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.877 107.50 IOPS, 322.50 MiB/s [2024-10-28T13:40:56.037Z] 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:41.877 "name": "raid_bdev1", 00:29:41.877 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:41.877 "strip_size_kb": 0, 00:29:41.877 "state": "online", 00:29:41.877 "raid_level": "raid1", 00:29:41.877 "superblock": true, 00:29:41.877 "num_base_bdevs": 2, 
00:29:41.877 "num_base_bdevs_discovered": 2, 00:29:41.877 "num_base_bdevs_operational": 2, 00:29:41.877 "process": { 00:29:41.877 "type": "rebuild", 00:29:41.877 "target": "spare", 00:29:41.877 "progress": { 00:29:41.877 "blocks": 51200, 00:29:41.877 "percent": 80 00:29:41.877 } 00:29:41.877 }, 00:29:41.877 "base_bdevs_list": [ 00:29:41.877 { 00:29:41.877 "name": "spare", 00:29:41.877 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:41.877 "is_configured": true, 00:29:41.877 "data_offset": 2048, 00:29:41.877 "data_size": 63488 00:29:41.877 }, 00:29:41.877 { 00:29:41.877 "name": "BaseBdev2", 00:29:41.877 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:41.877 "is_configured": true, 00:29:41.877 "data_offset": 2048, 00:29:41.877 "data_size": 63488 00:29:41.877 } 00:29:41.877 ] 00:29:41.877 }' 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.877 13:40:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:42.444 [2024-10-28 13:40:56.441533] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:42.444 [2024-10-28 13:40:56.541591] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:42.444 [2024-10-28 13:40:56.543691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.961 96.71 IOPS, 290.14 MiB/s [2024-10-28T13:40:57.121Z] 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.961 13:40:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:42.961 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.961 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:42.961 "name": "raid_bdev1", 00:29:42.961 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:42.961 "strip_size_kb": 0, 00:29:42.961 "state": "online", 00:29:42.961 "raid_level": "raid1", 00:29:42.961 "superblock": true, 00:29:42.961 "num_base_bdevs": 2, 00:29:42.961 "num_base_bdevs_discovered": 2, 00:29:42.961 "num_base_bdevs_operational": 2, 00:29:42.961 "base_bdevs_list": [ 00:29:42.961 { 00:29:42.961 "name": "spare", 00:29:42.961 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:42.961 "is_configured": true, 00:29:42.961 "data_offset": 2048, 00:29:42.961 "data_size": 63488 00:29:42.961 }, 00:29:42.961 { 00:29:42.961 "name": "BaseBdev2", 00:29:42.961 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:42.961 "is_configured": true, 00:29:42.961 "data_offset": 2048, 00:29:42.961 "data_size": 63488 
00:29:42.961 } 00:29:42.961 ] 00:29:42.961 }' 00:29:42.961 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:42.961 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:42.961 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:43.220 "name": "raid_bdev1", 00:29:43.220 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:43.220 "strip_size_kb": 0, 00:29:43.220 "state": 
"online", 00:29:43.220 "raid_level": "raid1", 00:29:43.220 "superblock": true, 00:29:43.220 "num_base_bdevs": 2, 00:29:43.220 "num_base_bdevs_discovered": 2, 00:29:43.220 "num_base_bdevs_operational": 2, 00:29:43.220 "base_bdevs_list": [ 00:29:43.220 { 00:29:43.220 "name": "spare", 00:29:43.220 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:43.220 "is_configured": true, 00:29:43.220 "data_offset": 2048, 00:29:43.220 "data_size": 63488 00:29:43.220 }, 00:29:43.220 { 00:29:43.220 "name": "BaseBdev2", 00:29:43.220 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:43.220 "is_configured": true, 00:29:43.220 "data_offset": 2048, 00:29:43.220 "data_size": 63488 00:29:43.220 } 00:29:43.220 ] 00:29:43.220 }' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:43.220 
13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.220 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.478 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:43.478 "name": "raid_bdev1", 00:29:43.478 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:43.478 "strip_size_kb": 0, 00:29:43.478 "state": "online", 00:29:43.478 "raid_level": "raid1", 00:29:43.478 "superblock": true, 00:29:43.478 "num_base_bdevs": 2, 00:29:43.478 "num_base_bdevs_discovered": 2, 00:29:43.478 "num_base_bdevs_operational": 2, 00:29:43.478 "base_bdevs_list": [ 00:29:43.478 { 00:29:43.478 "name": "spare", 00:29:43.478 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:43.478 "is_configured": true, 00:29:43.478 "data_offset": 2048, 00:29:43.478 "data_size": 63488 00:29:43.478 }, 00:29:43.478 { 00:29:43.478 "name": "BaseBdev2", 00:29:43.478 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:43.478 "is_configured": true, 00:29:43.478 "data_offset": 2048, 00:29:43.478 "data_size": 63488 00:29:43.478 } 00:29:43.478 ] 00:29:43.478 }' 00:29:43.478 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:43.478 13:40:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.994 88.88 IOPS, 266.62 MiB/s [2024-10-28T13:40:58.154Z] 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.994 [2024-10-28 13:40:57.902517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:43.994 [2024-10-28 13:40:57.902559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:43.994 00:29:43.994 Latency(us) 00:29:43.994 [2024-10-28T13:40:58.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.994 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:43.994 raid_bdev1 : 8.13 88.18 264.53 0.00 0.00 14690.12 281.13 111530.36 00:29:43.994 [2024-10-28T13:40:58.154Z] =================================================================================================================== 00:29:43.994 [2024-10-28T13:40:58.154Z] Total : 88.18 264.53 0.00 0.00 14690.12 281.13 111530.36 00:29:43.994 { 00:29:43.994 "results": [ 00:29:43.994 { 00:29:43.994 "job": "raid_bdev1", 00:29:43.994 "core_mask": "0x1", 00:29:43.994 "workload": "randrw", 00:29:43.994 "percentage": 50, 00:29:43.994 "status": "finished", 00:29:43.994 "queue_depth": 2, 00:29:43.994 "io_size": 3145728, 00:29:43.994 "runtime": 8.131263, 00:29:43.994 "iops": 88.17818338922255, 00:29:43.994 "mibps": 264.53455016766765, 00:29:43.994 "io_failed": 0, 00:29:43.994 "io_timeout": 0, 00:29:43.994 "avg_latency_us": 14690.120674527705, 00:29:43.994 "min_latency_us": 281.13454545454545, 00:29:43.994 "max_latency_us": 111530.35636363637 00:29:43.994 } 00:29:43.994 ], 00:29:43.994 "core_count": 1 00:29:43.994 } 
00:29:43.994 [2024-10-28 13:40:57.979594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.994 [2024-10-28 13:40:57.979652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:43.994 [2024-10-28 13:40:57.979770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:43.994 [2024-10-28 13:40:57.979788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:43.994 13:40:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:43.994 13:40:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:43.994 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:29:44.252 /dev/nbd0 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.252 1+0 records in 00:29:44.252 1+0 
records out 00:29:44.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354352 s, 11.6 MB/s 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 
)) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.252 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:44.818 /dev/nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.818 1+0 records in 00:29:44.818 1+0 records out 00:29:44.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491953 s, 8.3 MB/s 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:29:44.818 13:40:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:44.818 13:40:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:45.117 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 
00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.375 [2024-10-28 13:40:59.428869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:45.375 [2024-10-28 13:40:59.428932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.375 [2024-10-28 13:40:59.428968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:45.375 [2024-10-28 13:40:59.428983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.375 [2024-10-28 13:40:59.432083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.375 [2024-10-28 13:40:59.432128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:45.375 [2024-10-28 13:40:59.432264] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:45.375 [2024-10-28 13:40:59.432322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.375 [2024-10-28 13:40:59.432505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:29:45.375 spare 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.375 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.375 [2024-10-28 13:40:59.532667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:45.375 [2024-10-28 13:40:59.532718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:45.375 [2024-10-28 13:40:59.533174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:29:45.375 [2024-10-28 13:40:59.533428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:45.375 [2024-10-28 13:40:59.533447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:45.633 [2024-10-28 13:40:59.533664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:45.633 "name": "raid_bdev1", 00:29:45.633 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:45.633 "strip_size_kb": 0, 00:29:45.633 "state": "online", 00:29:45.633 "raid_level": "raid1", 00:29:45.633 "superblock": true, 00:29:45.633 "num_base_bdevs": 2, 00:29:45.633 "num_base_bdevs_discovered": 2, 00:29:45.633 "num_base_bdevs_operational": 2, 00:29:45.633 "base_bdevs_list": [ 00:29:45.633 { 00:29:45.633 "name": "spare", 00:29:45.633 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:45.633 "is_configured": true, 00:29:45.633 "data_offset": 2048, 00:29:45.633 "data_size": 63488 00:29:45.633 }, 00:29:45.633 { 00:29:45.633 "name": "BaseBdev2", 00:29:45.633 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:45.633 "is_configured": true, 00:29:45.633 "data_offset": 2048, 00:29:45.633 "data_size": 63488 00:29:45.633 } 00:29:45.633 ] 
00:29:45.633 }' 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:45.633 13:40:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.892 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.149 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:46.150 "name": "raid_bdev1", 00:29:46.150 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:46.150 "strip_size_kb": 0, 00:29:46.150 "state": "online", 00:29:46.150 "raid_level": "raid1", 00:29:46.150 "superblock": true, 00:29:46.150 "num_base_bdevs": 2, 00:29:46.150 "num_base_bdevs_discovered": 2, 00:29:46.150 "num_base_bdevs_operational": 2, 00:29:46.150 "base_bdevs_list": [ 00:29:46.150 { 00:29:46.150 "name": "spare", 00:29:46.150 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:46.150 "is_configured": true, 00:29:46.150 
"data_offset": 2048, 00:29:46.150 "data_size": 63488 00:29:46.150 }, 00:29:46.150 { 00:29:46.150 "name": "BaseBdev2", 00:29:46.150 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:46.150 "is_configured": true, 00:29:46.150 "data_offset": 2048, 00:29:46.150 "data_size": 63488 00:29:46.150 } 00:29:46.150 ] 00:29:46.150 }' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.150 [2024-10-28 13:41:00.238026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.150 "name": "raid_bdev1", 00:29:46.150 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:46.150 "strip_size_kb": 0, 00:29:46.150 "state": "online", 00:29:46.150 "raid_level": "raid1", 
00:29:46.150 "superblock": true, 00:29:46.150 "num_base_bdevs": 2, 00:29:46.150 "num_base_bdevs_discovered": 1, 00:29:46.150 "num_base_bdevs_operational": 1, 00:29:46.150 "base_bdevs_list": [ 00:29:46.150 { 00:29:46.150 "name": null, 00:29:46.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.150 "is_configured": false, 00:29:46.150 "data_offset": 0, 00:29:46.150 "data_size": 63488 00:29:46.150 }, 00:29:46.150 { 00:29:46.150 "name": "BaseBdev2", 00:29:46.150 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:46.150 "is_configured": true, 00:29:46.150 "data_offset": 2048, 00:29:46.150 "data_size": 63488 00:29:46.150 } 00:29:46.150 ] 00:29:46.150 }' 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.150 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.715 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:46.715 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.715 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:46.715 [2024-10-28 13:41:00.742362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:46.715 [2024-10-28 13:41:00.742708] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:46.715 [2024-10-28 13:41:00.742741] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:46.715 [2024-10-28 13:41:00.742789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:46.715 [2024-10-28 13:41:00.750214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:29:46.715 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.715 13:41:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:29:46.715 [2024-10-28 13:41:00.752995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:47.715 "name": "raid_bdev1", 00:29:47.715 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:47.715 "strip_size_kb": 0, 00:29:47.715 "state": "online", 
00:29:47.715 "raid_level": "raid1", 00:29:47.715 "superblock": true, 00:29:47.715 "num_base_bdevs": 2, 00:29:47.715 "num_base_bdevs_discovered": 2, 00:29:47.715 "num_base_bdevs_operational": 2, 00:29:47.715 "process": { 00:29:47.715 "type": "rebuild", 00:29:47.715 "target": "spare", 00:29:47.715 "progress": { 00:29:47.715 "blocks": 20480, 00:29:47.715 "percent": 32 00:29:47.715 } 00:29:47.715 }, 00:29:47.715 "base_bdevs_list": [ 00:29:47.715 { 00:29:47.715 "name": "spare", 00:29:47.715 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:47.715 "is_configured": true, 00:29:47.715 "data_offset": 2048, 00:29:47.715 "data_size": 63488 00:29:47.715 }, 00:29:47.715 { 00:29:47.715 "name": "BaseBdev2", 00:29:47.715 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:47.715 "is_configured": true, 00:29:47.715 "data_offset": 2048, 00:29:47.715 "data_size": 63488 00:29:47.715 } 00:29:47.715 ] 00:29:47.715 }' 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:47.715 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:47.973 [2024-10-28 13:41:01.923429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:47.973 [2024-10-28 13:41:01.961878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:47.973 [2024-10-28 
13:41:01.961957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.973 [2024-10-28 13:41:01.961979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:47.973 [2024-10-28 13:41:01.961994] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:29:47.973 13:41:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.973 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:47.973 "name": "raid_bdev1", 00:29:47.973 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:47.973 "strip_size_kb": 0, 00:29:47.973 "state": "online", 00:29:47.973 "raid_level": "raid1", 00:29:47.973 "superblock": true, 00:29:47.973 "num_base_bdevs": 2, 00:29:47.973 "num_base_bdevs_discovered": 1, 00:29:47.973 "num_base_bdevs_operational": 1, 00:29:47.973 "base_bdevs_list": [ 00:29:47.973 { 00:29:47.973 "name": null, 00:29:47.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.973 "is_configured": false, 00:29:47.973 "data_offset": 0, 00:29:47.973 "data_size": 63488 00:29:47.973 }, 00:29:47.973 { 00:29:47.973 "name": "BaseBdev2", 00:29:47.973 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:47.973 "is_configured": true, 00:29:47.973 "data_offset": 2048, 00:29:47.973 "data_size": 63488 00:29:47.973 } 00:29:47.973 ] 00:29:47.973 }' 00:29:47.973 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:47.973 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:48.540 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:48.540 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.540 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:48.540 [2024-10-28 13:41:02.501052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:48.540 [2024-10-28 13:41:02.501258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:48.540 [2024-10-28 13:41:02.501314] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:29:48.540 [2024-10-28 13:41:02.501344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:48.540 [2024-10-28 13:41:02.502197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:48.540 [2024-10-28 13:41:02.502261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:48.540 [2024-10-28 13:41:02.502434] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:48.540 [2024-10-28 13:41:02.502484] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:48.540 [2024-10-28 13:41:02.502506] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:48.540 [2024-10-28 13:41:02.502567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:48.540 [2024-10-28 13:41:02.512828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:29:48.540 spare 00:29:48.540 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.540 13:41:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:48.540 [2024-10-28 13:41:02.516224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:49.479 "name": "raid_bdev1", 00:29:49.479 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:49.479 "strip_size_kb": 0, 00:29:49.479 "state": "online", 00:29:49.479 "raid_level": "raid1", 00:29:49.479 "superblock": true, 00:29:49.479 "num_base_bdevs": 2, 00:29:49.479 "num_base_bdevs_discovered": 2, 00:29:49.479 "num_base_bdevs_operational": 2, 00:29:49.479 "process": { 00:29:49.479 "type": "rebuild", 00:29:49.479 "target": "spare", 00:29:49.479 "progress": { 00:29:49.479 "blocks": 20480, 00:29:49.479 "percent": 32 00:29:49.479 } 00:29:49.479 }, 00:29:49.479 "base_bdevs_list": [ 00:29:49.479 { 00:29:49.479 "name": "spare", 00:29:49.479 "uuid": "073df4e9-712e-5023-86b1-d79f6ab1cbed", 00:29:49.479 "is_configured": true, 00:29:49.479 "data_offset": 2048, 00:29:49.479 "data_size": 63488 00:29:49.479 }, 00:29:49.479 { 00:29:49.479 "name": "BaseBdev2", 00:29:49.479 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:49.479 "is_configured": true, 00:29:49.479 "data_offset": 2048, 00:29:49.479 "data_size": 63488 00:29:49.479 } 00:29:49.479 ] 00:29:49.479 }' 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:29:49.479 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:49.737 [2024-10-28 13:41:03.686301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:49.737 [2024-10-28 13:41:03.728241] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:49.737 [2024-10-28 13:41:03.728365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.737 [2024-10-28 13:41:03.728396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:49.737 [2024-10-28 13:41:03.728409] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.737 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:49.737 "name": "raid_bdev1", 00:29:49.737 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:49.737 "strip_size_kb": 0, 00:29:49.737 "state": "online", 00:29:49.737 "raid_level": "raid1", 00:29:49.737 "superblock": true, 00:29:49.737 "num_base_bdevs": 2, 00:29:49.738 "num_base_bdevs_discovered": 1, 00:29:49.738 "num_base_bdevs_operational": 1, 00:29:49.738 "base_bdevs_list": [ 00:29:49.738 { 00:29:49.738 "name": null, 00:29:49.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.738 "is_configured": false, 00:29:49.738 "data_offset": 0, 00:29:49.738 "data_size": 63488 00:29:49.738 }, 00:29:49.738 { 00:29:49.738 "name": "BaseBdev2", 00:29:49.738 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:49.738 "is_configured": true, 00:29:49.738 "data_offset": 2048, 00:29:49.738 "data_size": 63488 00:29:49.738 } 00:29:49.738 ] 00:29:49.738 }' 
00:29:49.738 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:49.738 13:41:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:50.332 "name": "raid_bdev1", 00:29:50.332 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:50.332 "strip_size_kb": 0, 00:29:50.332 "state": "online", 00:29:50.332 "raid_level": "raid1", 00:29:50.332 "superblock": true, 00:29:50.332 "num_base_bdevs": 2, 00:29:50.332 "num_base_bdevs_discovered": 1, 00:29:50.332 "num_base_bdevs_operational": 1, 00:29:50.332 "base_bdevs_list": [ 00:29:50.332 { 00:29:50.332 "name": null, 00:29:50.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.332 "is_configured": false, 00:29:50.332 "data_offset": 0, 
00:29:50.332 "data_size": 63488 00:29:50.332 }, 00:29:50.332 { 00:29:50.332 "name": "BaseBdev2", 00:29:50.332 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:50.332 "is_configured": true, 00:29:50.332 "data_offset": 2048, 00:29:50.332 "data_size": 63488 00:29:50.332 } 00:29:50.332 ] 00:29:50.332 }' 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:50.332 [2024-10-28 13:41:04.446184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:50.332 [2024-10-28 13:41:04.446319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.332 [2024-10-28 13:41:04.446393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:50.332 [2024-10-28 13:41:04.446410] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.332 [2024-10-28 13:41:04.447057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.332 [2024-10-28 13:41:04.447106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:50.332 [2024-10-28 13:41:04.447240] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:50.332 [2024-10-28 13:41:04.447276] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:50.332 [2024-10-28 13:41:04.447308] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:50.332 [2024-10-28 13:41:04.447356] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:50.332 BaseBdev1 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.332 13:41:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:51.704 "name": "raid_bdev1", 00:29:51.704 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:51.704 "strip_size_kb": 0, 00:29:51.704 "state": "online", 00:29:51.704 "raid_level": "raid1", 00:29:51.704 "superblock": true, 00:29:51.704 "num_base_bdevs": 2, 00:29:51.704 "num_base_bdevs_discovered": 1, 00:29:51.704 "num_base_bdevs_operational": 1, 00:29:51.704 "base_bdevs_list": [ 00:29:51.704 { 00:29:51.704 "name": null, 00:29:51.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.704 "is_configured": false, 00:29:51.704 "data_offset": 0, 00:29:51.704 "data_size": 63488 00:29:51.704 }, 00:29:51.704 { 00:29:51.704 "name": "BaseBdev2", 00:29:51.704 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:51.704 "is_configured": true, 00:29:51.704 "data_offset": 2048, 00:29:51.704 "data_size": 63488 00:29:51.704 } 00:29:51.704 ] 00:29:51.704 }' 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:51.704 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:51.961 13:41:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.961 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:51.961 "name": "raid_bdev1", 00:29:51.961 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:51.961 "strip_size_kb": 0, 00:29:51.961 "state": "online", 00:29:51.961 "raid_level": "raid1", 00:29:51.961 "superblock": true, 00:29:51.961 "num_base_bdevs": 2, 00:29:51.961 "num_base_bdevs_discovered": 1, 00:29:51.961 "num_base_bdevs_operational": 1, 00:29:51.961 "base_bdevs_list": [ 00:29:51.961 { 00:29:51.961 "name": null, 00:29:51.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.961 "is_configured": false, 00:29:51.961 "data_offset": 0, 00:29:51.961 "data_size": 63488 00:29:51.961 }, 00:29:51.961 { 00:29:51.961 "name": "BaseBdev2", 00:29:51.961 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:51.961 "is_configured": true, 
00:29:51.961 "data_offset": 2048, 00:29:51.961 "data_size": 63488 00:29:51.961 } 00:29:51.961 ] 00:29:51.961 }' 00:29:51.961 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:51.961 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:51.961 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.218 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:52.218 [2024-10-28 13:41:06.131042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:52.219 [2024-10-28 13:41:06.131353] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:52.219 [2024-10-28 13:41:06.131382] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:52.219 request: 00:29:52.219 { 00:29:52.219 "base_bdev": "BaseBdev1", 00:29:52.219 "raid_bdev": "raid_bdev1", 00:29:52.219 "method": "bdev_raid_add_base_bdev", 00:29:52.219 "req_id": 1 00:29:52.219 } 00:29:52.219 Got JSON-RPC error response 00:29:52.219 response: 00:29:52.219 { 00:29:52.219 "code": -22, 00:29:52.219 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:52.219 } 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:52.219 13:41:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:53.151 "name": "raid_bdev1", 00:29:53.151 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:53.151 "strip_size_kb": 0, 00:29:53.151 "state": "online", 00:29:53.151 "raid_level": "raid1", 00:29:53.151 "superblock": true, 00:29:53.151 "num_base_bdevs": 2, 00:29:53.151 "num_base_bdevs_discovered": 1, 00:29:53.151 "num_base_bdevs_operational": 1, 00:29:53.151 "base_bdevs_list": [ 00:29:53.151 { 00:29:53.151 "name": null, 00:29:53.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.151 "is_configured": false, 00:29:53.151 "data_offset": 0, 00:29:53.151 "data_size": 63488 00:29:53.151 }, 00:29:53.151 { 00:29:53.151 "name": "BaseBdev2", 00:29:53.151 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:53.151 "is_configured": true, 00:29:53.151 "data_offset": 2048, 00:29:53.151 "data_size": 63488 00:29:53.151 } 00:29:53.151 ] 00:29:53.151 }' 
00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:53.151 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:53.718 "name": "raid_bdev1", 00:29:53.718 "uuid": "25ffcfcd-27c1-4c3a-a9bd-f56d641fa28b", 00:29:53.718 "strip_size_kb": 0, 00:29:53.718 "state": "online", 00:29:53.718 "raid_level": "raid1", 00:29:53.718 "superblock": true, 00:29:53.718 "num_base_bdevs": 2, 00:29:53.718 "num_base_bdevs_discovered": 1, 00:29:53.718 "num_base_bdevs_operational": 1, 00:29:53.718 "base_bdevs_list": [ 00:29:53.718 { 00:29:53.718 "name": null, 00:29:53.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.718 "is_configured": false, 00:29:53.718 "data_offset": 0, 
00:29:53.718 "data_size": 63488 00:29:53.718 }, 00:29:53.718 { 00:29:53.718 "name": "BaseBdev2", 00:29:53.718 "uuid": "74b6a4b5-8c30-5deb-a421-7e65c02aad53", 00:29:53.718 "is_configured": true, 00:29:53.718 "data_offset": 2048, 00:29:53.718 "data_size": 63488 00:29:53.718 } 00:29:53.718 ] 00:29:53.718 }' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89554 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89554 ']' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89554 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89554 00:29:53.718 killing process with pid 89554 00:29:53.718 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:53.719 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:53.719 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89554' 00:29:53.719 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89554 00:29:53.719 Received shutdown signal, test time was 
about 18.026841 seconds 00:29:53.719 00:29:53.719 Latency(us) 00:29:53.719 [2024-10-28T13:41:07.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.719 [2024-10-28T13:41:07.879Z] =================================================================================================================== 00:29:53.719 [2024-10-28T13:41:07.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.719 13:41:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89554 00:29:53.719 [2024-10-28 13:41:07.870265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:53.719 [2024-10-28 13:41:07.870550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:53.719 [2024-10-28 13:41:07.870641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:53.719 [2024-10-28 13:41:07.870666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:53.977 [2024-10-28 13:41:07.913639] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:54.235 00:29:54.235 real 0m20.398s 00:29:54.235 user 0m28.409s 00:29:54.235 sys 0m1.984s 00:29:54.235 ************************************ 00:29:54.235 END TEST raid_rebuild_test_sb_io 00:29:54.235 ************************************ 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:54.235 13:41:08 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:29:54.235 13:41:08 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:54.235 13:41:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:54.235 
13:41:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:54.235 13:41:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:54.235 ************************************ 00:29:54.235 START TEST raid_rebuild_test 00:29:54.235 ************************************ 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:29:54.235 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=90244 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 90244 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 90244 ']' 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:54.236 13:41:08 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:54.236 13:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.494 [2024-10-28 13:41:08.419940] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:29:54.494 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:54.494 Zero copy mechanism will not be used. 00:29:54.494 [2024-10-28 13:41:08.420412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90244 ] 00:29:54.494 [2024-10-28 13:41:08.577400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:54.494 [2024-10-28 13:41:08.602765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.752 [2024-10-28 13:41:08.673461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.752 [2024-10-28 13:41:08.756806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:54.752 [2024-10-28 13:41:08.757219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.317 BaseBdev1_malloc 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.317 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 [2024-10-28 13:41:09.427346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:55.318 [2024-10-28 13:41:09.427895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.318 [2024-10-28 13:41:09.427976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:55.318 [2024-10-28 13:41:09.428017] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.318 [2024-10-28 13:41:09.431425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.318 [2024-10-28 13:41:09.431489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:55.318 BaseBdev1 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 BaseBdev2_malloc 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.318 [2024-10-28 13:41:09.457475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:55.318 [2024-10-28 13:41:09.457589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.318 [2024-10-28 13:41:09.457623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:55.318 [2024-10-28 13:41:09.457644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.318 [2024-10-28 13:41:09.460844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.318 [2024-10-28 13:41:09.460899] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:55.318 BaseBdev2 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.318 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 BaseBdev3_malloc 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 [2024-10-28 13:41:09.489106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:55.587 [2024-10-28 13:41:09.489217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.587 [2024-10-28 13:41:09.489254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:55.587 [2024-10-28 13:41:09.489276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.587 [2024-10-28 13:41:09.492072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.587 [2024-10-28 13:41:09.492510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:55.587 BaseBdev3 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 
13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 BaseBdev4_malloc 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 [2024-10-28 13:41:09.536808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:55.587 [2024-10-28 13:41:09.536915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.587 [2024-10-28 13:41:09.536949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:55.587 [2024-10-28 13:41:09.536988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.587 [2024-10-28 13:41:09.540187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.587 [2024-10-28 13:41:09.540263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:55.587 BaseBdev4 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 spare_malloc 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 spare_delay 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 [2024-10-28 13:41:09.580965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:55.587 [2024-10-28 13:41:09.581081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.587 [2024-10-28 13:41:09.581114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:55.587 [2024-10-28 13:41:09.581159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.587 [2024-10-28 13:41:09.584144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.587 [2024-10-28 13:41:09.584549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:55.587 spare 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.587 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.587 [2024-10-28 13:41:09.589216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:55.587 [2024-10-28 13:41:09.591864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:55.588 [2024-10-28 13:41:09.592183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:55.588 [2024-10-28 13:41:09.592282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:55.588 [2024-10-28 13:41:09.592463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:29:55.588 [2024-10-28 13:41:09.592488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:55.588 [2024-10-28 13:41:09.592846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:29:55.588 [2024-10-28 13:41:09.593110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:29:55.588 [2024-10-28 13:41:09.593129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:29:55.588 [2024-10-28 13:41:09.593421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.588 "name": "raid_bdev1", 00:29:55.588 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:29:55.588 "strip_size_kb": 0, 00:29:55.588 "state": "online", 00:29:55.588 "raid_level": "raid1", 00:29:55.588 "superblock": false, 00:29:55.588 "num_base_bdevs": 4, 00:29:55.588 "num_base_bdevs_discovered": 4, 00:29:55.588 "num_base_bdevs_operational": 4, 00:29:55.588 "base_bdevs_list": [ 00:29:55.588 { 00:29:55.588 "name": "BaseBdev1", 00:29:55.588 "uuid": "cc1d4766-9a46-5622-a4a3-dfa4f487c2d3", 00:29:55.588 "is_configured": true, 00:29:55.588 "data_offset": 0, 00:29:55.588 "data_size": 65536 00:29:55.588 }, 00:29:55.588 { 00:29:55.588 
"name": "BaseBdev2", 00:29:55.588 "uuid": "6bfe1240-e461-5dec-839c-36edf35a7dca", 00:29:55.588 "is_configured": true, 00:29:55.588 "data_offset": 0, 00:29:55.588 "data_size": 65536 00:29:55.588 }, 00:29:55.588 { 00:29:55.588 "name": "BaseBdev3", 00:29:55.588 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:29:55.588 "is_configured": true, 00:29:55.588 "data_offset": 0, 00:29:55.588 "data_size": 65536 00:29:55.588 }, 00:29:55.588 { 00:29:55.588 "name": "BaseBdev4", 00:29:55.588 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:29:55.588 "is_configured": true, 00:29:55.588 "data_offset": 0, 00:29:55.588 "data_size": 65536 00:29:55.588 } 00:29:55.588 ] 00:29:55.588 }' 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.588 13:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:29:56.163 [2024-10-28 13:41:10.138092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:56.163 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:56.420 [2024-10-28 13:41:10.477931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:56.420 /dev/nbd0 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:56.420 
13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:56.420 1+0 records in 00:29:56.420 1+0 records out 00:29:56.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444797 s, 9.2 MB/s 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:29:56.420 13:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:30:06.443 65536+0 records in 00:30:06.443 65536+0 records out 00:30:06.443 33554432 bytes (34 MB, 32 MiB) copied, 9.40272 s, 3.6 MB/s 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:06.443 13:41:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:06.443 [2024-10-28 13:41:20.223630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:06.443 13:41:20 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.443 [2024-10-28 13:41:20.259784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.443 13:41:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:06.443 "name": "raid_bdev1", 00:30:06.443 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:06.443 "strip_size_kb": 0, 00:30:06.443 "state": "online", 00:30:06.443 "raid_level": "raid1", 00:30:06.443 "superblock": false, 00:30:06.443 "num_base_bdevs": 4, 00:30:06.443 "num_base_bdevs_discovered": 3, 00:30:06.443 "num_base_bdevs_operational": 3, 00:30:06.443 "base_bdevs_list": [ 00:30:06.443 { 00:30:06.443 "name": null, 00:30:06.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.443 "is_configured": false, 00:30:06.443 "data_offset": 0, 00:30:06.443 "data_size": 65536 00:30:06.443 }, 00:30:06.443 { 00:30:06.443 "name": "BaseBdev2", 00:30:06.443 "uuid": "6bfe1240-e461-5dec-839c-36edf35a7dca", 00:30:06.443 "is_configured": true, 00:30:06.443 "data_offset": 0, 00:30:06.443 "data_size": 65536 00:30:06.443 }, 00:30:06.443 { 00:30:06.443 "name": "BaseBdev3", 00:30:06.443 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:06.443 "is_configured": true, 00:30:06.443 "data_offset": 0, 00:30:06.443 "data_size": 65536 00:30:06.443 }, 00:30:06.443 { 00:30:06.443 "name": "BaseBdev4", 00:30:06.443 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:06.443 "is_configured": true, 00:30:06.443 "data_offset": 0, 00:30:06.443 "data_size": 65536 00:30:06.443 } 00:30:06.443 ] 00:30:06.443 }' 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:06.443 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.701 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:06.701 13:41:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.701 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.701 [2024-10-28 13:41:20.768023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:06.701 [2024-10-28 13:41:20.776560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180 00:30:06.701 13:41:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.701 13:41:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:06.701 [2024-10-28 13:41:20.779847] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.653 13:41:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:07.920 "name": "raid_bdev1", 00:30:07.920 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 
00:30:07.920 "strip_size_kb": 0, 00:30:07.920 "state": "online", 00:30:07.920 "raid_level": "raid1", 00:30:07.920 "superblock": false, 00:30:07.920 "num_base_bdevs": 4, 00:30:07.920 "num_base_bdevs_discovered": 4, 00:30:07.920 "num_base_bdevs_operational": 4, 00:30:07.920 "process": { 00:30:07.920 "type": "rebuild", 00:30:07.920 "target": "spare", 00:30:07.920 "progress": { 00:30:07.920 "blocks": 20480, 00:30:07.920 "percent": 31 00:30:07.920 } 00:30:07.920 }, 00:30:07.920 "base_bdevs_list": [ 00:30:07.920 { 00:30:07.920 "name": "spare", 00:30:07.920 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev2", 00:30:07.920 "uuid": "6bfe1240-e461-5dec-839c-36edf35a7dca", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev3", 00:30:07.920 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev4", 00:30:07.920 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 } 00:30:07.920 ] 00:30:07.920 }' 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.920 13:41:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.920 [2024-10-28 13:41:21.962702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:07.920 [2024-10-28 13:41:21.993635] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:07.920 [2024-10-28 13:41:21.994098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.920 [2024-10-28 13:41:21.994155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:07.920 [2024-10-28 13:41:21.994184] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:07.920 "name": "raid_bdev1", 00:30:07.920 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:07.920 "strip_size_kb": 0, 00:30:07.920 "state": "online", 00:30:07.920 "raid_level": "raid1", 00:30:07.920 "superblock": false, 00:30:07.920 "num_base_bdevs": 4, 00:30:07.920 "num_base_bdevs_discovered": 3, 00:30:07.920 "num_base_bdevs_operational": 3, 00:30:07.920 "base_bdevs_list": [ 00:30:07.920 { 00:30:07.920 "name": null, 00:30:07.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.920 "is_configured": false, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev2", 00:30:07.920 "uuid": "6bfe1240-e461-5dec-839c-36edf35a7dca", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev3", 00:30:07.920 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 }, 00:30:07.920 { 00:30:07.920 "name": "BaseBdev4", 00:30:07.920 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:07.920 "is_configured": true, 00:30:07.920 "data_offset": 0, 00:30:07.920 "data_size": 65536 00:30:07.920 } 00:30:07.920 ] 00:30:07.920 }' 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.920 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:08.486 "name": "raid_bdev1", 00:30:08.486 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:08.486 "strip_size_kb": 0, 00:30:08.486 "state": "online", 00:30:08.486 "raid_level": "raid1", 00:30:08.486 "superblock": false, 00:30:08.486 "num_base_bdevs": 4, 00:30:08.486 "num_base_bdevs_discovered": 3, 00:30:08.486 "num_base_bdevs_operational": 3, 00:30:08.486 "base_bdevs_list": [ 00:30:08.486 { 00:30:08.486 "name": null, 00:30:08.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.486 "is_configured": false, 00:30:08.486 "data_offset": 0, 00:30:08.486 "data_size": 65536 00:30:08.486 }, 00:30:08.486 { 00:30:08.486 "name": "BaseBdev2", 00:30:08.486 "uuid": 
"6bfe1240-e461-5dec-839c-36edf35a7dca", 00:30:08.486 "is_configured": true, 00:30:08.486 "data_offset": 0, 00:30:08.486 "data_size": 65536 00:30:08.486 }, 00:30:08.486 { 00:30:08.486 "name": "BaseBdev3", 00:30:08.486 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:08.486 "is_configured": true, 00:30:08.486 "data_offset": 0, 00:30:08.486 "data_size": 65536 00:30:08.486 }, 00:30:08.486 { 00:30:08.486 "name": "BaseBdev4", 00:30:08.486 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:08.486 "is_configured": true, 00:30:08.486 "data_offset": 0, 00:30:08.486 "data_size": 65536 00:30:08.486 } 00:30:08.486 ] 00:30:08.486 }' 00:30:08.486 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:08.744 [2024-10-28 13:41:22.706793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:08.744 [2024-10-28 13:41:22.715449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.744 13:41:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:08.744 [2024-10-28 13:41:22.718570] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:09.679 13:41:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.679 "name": "raid_bdev1", 00:30:09.679 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:09.679 "strip_size_kb": 0, 00:30:09.679 "state": "online", 00:30:09.679 "raid_level": "raid1", 00:30:09.679 "superblock": false, 00:30:09.679 "num_base_bdevs": 4, 00:30:09.679 "num_base_bdevs_discovered": 4, 00:30:09.679 "num_base_bdevs_operational": 4, 00:30:09.679 "process": { 00:30:09.679 "type": "rebuild", 00:30:09.679 "target": "spare", 00:30:09.679 "progress": { 00:30:09.679 "blocks": 20480, 00:30:09.679 "percent": 31 00:30:09.679 } 00:30:09.679 }, 00:30:09.679 "base_bdevs_list": [ 00:30:09.679 { 00:30:09.679 "name": "spare", 00:30:09.679 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:09.679 "is_configured": true, 00:30:09.679 "data_offset": 0, 00:30:09.679 "data_size": 65536 00:30:09.679 }, 00:30:09.679 { 
00:30:09.679 "name": "BaseBdev2", 00:30:09.679 "uuid": "6bfe1240-e461-5dec-839c-36edf35a7dca", 00:30:09.679 "is_configured": true, 00:30:09.679 "data_offset": 0, 00:30:09.679 "data_size": 65536 00:30:09.679 }, 00:30:09.679 { 00:30:09.679 "name": "BaseBdev3", 00:30:09.679 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:09.679 "is_configured": true, 00:30:09.679 "data_offset": 0, 00:30:09.679 "data_size": 65536 00:30:09.679 }, 00:30:09.679 { 00:30:09.679 "name": "BaseBdev4", 00:30:09.679 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:09.679 "is_configured": true, 00:30:09.679 "data_offset": 0, 00:30:09.679 "data_size": 65536 00:30:09.679 } 00:30:09.679 ] 00:30:09.679 }' 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.679 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 [2024-10-28 13:41:23.868302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:09.938 
[2024-10-28 13:41:23.930157] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.938 "name": "raid_bdev1", 00:30:09.938 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:09.938 "strip_size_kb": 0, 00:30:09.938 "state": "online", 00:30:09.938 "raid_level": "raid1", 00:30:09.938 "superblock": false, 00:30:09.938 "num_base_bdevs": 4, 00:30:09.938 "num_base_bdevs_discovered": 3, 00:30:09.938 "num_base_bdevs_operational": 3, 00:30:09.938 "process": { 
00:30:09.938 "type": "rebuild", 00:30:09.938 "target": "spare", 00:30:09.938 "progress": { 00:30:09.938 "blocks": 24576, 00:30:09.938 "percent": 37 00:30:09.938 } 00:30:09.938 }, 00:30:09.938 "base_bdevs_list": [ 00:30:09.938 { 00:30:09.938 "name": "spare", 00:30:09.938 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:09.938 "is_configured": true, 00:30:09.938 "data_offset": 0, 00:30:09.938 "data_size": 65536 00:30:09.938 }, 00:30:09.938 { 00:30:09.938 "name": null, 00:30:09.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.938 "is_configured": false, 00:30:09.938 "data_offset": 0, 00:30:09.938 "data_size": 65536 00:30:09.938 }, 00:30:09.938 { 00:30:09.938 "name": "BaseBdev3", 00:30:09.938 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:09.938 "is_configured": true, 00:30:09.938 "data_offset": 0, 00:30:09.938 "data_size": 65536 00:30:09.938 }, 00:30:09.938 { 00:30:09.938 "name": "BaseBdev4", 00:30:09.938 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:09.938 "is_configured": true, 00:30:09.938 "data_offset": 0, 00:30:09.938 "data_size": 65536 00:30:09.938 } 00:30:09.938 ] 00:30:09.938 }' 00:30:09.938 13:41:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.938 13:41:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:10.196 "name": "raid_bdev1", 00:30:10.196 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:10.196 "strip_size_kb": 0, 00:30:10.196 "state": "online", 00:30:10.196 "raid_level": "raid1", 00:30:10.196 "superblock": false, 00:30:10.196 "num_base_bdevs": 4, 00:30:10.196 "num_base_bdevs_discovered": 3, 00:30:10.196 "num_base_bdevs_operational": 3, 00:30:10.196 "process": { 00:30:10.196 "type": "rebuild", 00:30:10.196 "target": "spare", 00:30:10.196 "progress": { 00:30:10.196 "blocks": 26624, 00:30:10.196 "percent": 40 00:30:10.196 } 00:30:10.196 }, 00:30:10.196 "base_bdevs_list": [ 00:30:10.196 { 00:30:10.196 "name": "spare", 00:30:10.196 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:10.196 "is_configured": true, 00:30:10.196 "data_offset": 0, 00:30:10.196 "data_size": 65536 00:30:10.196 }, 00:30:10.196 { 00:30:10.196 "name": null, 00:30:10.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.196 "is_configured": false, 00:30:10.196 "data_offset": 0, 00:30:10.196 "data_size": 65536 00:30:10.196 }, 
00:30:10.196 { 00:30:10.196 "name": "BaseBdev3", 00:30:10.196 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:10.196 "is_configured": true, 00:30:10.196 "data_offset": 0, 00:30:10.196 "data_size": 65536 00:30:10.196 }, 00:30:10.196 { 00:30:10.196 "name": "BaseBdev4", 00:30:10.196 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:10.196 "is_configured": true, 00:30:10.196 "data_offset": 0, 00:30:10.196 "data_size": 65536 00:30:10.196 } 00:30:10.196 ] 00:30:10.196 }' 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.196 13:41:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.130 13:41:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.130 13:41:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.389 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:11.389 "name": "raid_bdev1", 00:30:11.389 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:11.389 "strip_size_kb": 0, 00:30:11.389 "state": "online", 00:30:11.389 "raid_level": "raid1", 00:30:11.390 "superblock": false, 00:30:11.390 "num_base_bdevs": 4, 00:30:11.390 "num_base_bdevs_discovered": 3, 00:30:11.390 "num_base_bdevs_operational": 3, 00:30:11.390 "process": { 00:30:11.390 "type": "rebuild", 00:30:11.390 "target": "spare", 00:30:11.390 "progress": { 00:30:11.390 "blocks": 51200, 00:30:11.390 "percent": 78 00:30:11.390 } 00:30:11.390 }, 00:30:11.390 "base_bdevs_list": [ 00:30:11.390 { 00:30:11.390 "name": "spare", 00:30:11.390 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:11.390 "is_configured": true, 00:30:11.390 "data_offset": 0, 00:30:11.390 "data_size": 65536 00:30:11.390 }, 00:30:11.390 { 00:30:11.390 "name": null, 00:30:11.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.390 "is_configured": false, 00:30:11.390 "data_offset": 0, 00:30:11.390 "data_size": 65536 00:30:11.390 }, 00:30:11.390 { 00:30:11.390 "name": "BaseBdev3", 00:30:11.390 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:11.390 "is_configured": true, 00:30:11.390 "data_offset": 0, 00:30:11.390 "data_size": 65536 00:30:11.390 }, 00:30:11.390 { 00:30:11.390 "name": "BaseBdev4", 00:30:11.390 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:11.390 "is_configured": true, 00:30:11.390 "data_offset": 0, 00:30:11.390 "data_size": 65536 00:30:11.390 } 00:30:11.390 ] 00:30:11.390 }' 00:30:11.390 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:11.390 13:41:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:11.390 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:11.390 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:11.390 13:41:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:11.961 [2024-10-28 13:41:25.950246] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:11.961 [2024-10-28 13:41:25.950472] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:11.961 [2024-10-28 13:41:25.950561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:12.527 "name": "raid_bdev1", 00:30:12.527 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:12.527 "strip_size_kb": 0, 00:30:12.527 "state": "online", 00:30:12.527 "raid_level": "raid1", 00:30:12.527 "superblock": false, 00:30:12.527 "num_base_bdevs": 4, 00:30:12.527 "num_base_bdevs_discovered": 3, 00:30:12.527 "num_base_bdevs_operational": 3, 00:30:12.527 "base_bdevs_list": [ 00:30:12.527 { 00:30:12.527 "name": "spare", 00:30:12.527 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": null, 00:30:12.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.527 "is_configured": false, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": "BaseBdev3", 00:30:12.527 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": "BaseBdev4", 00:30:12.527 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 } 00:30:12.527 ] 00:30:12.527 }' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:12.527 "name": "raid_bdev1", 00:30:12.527 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:12.527 "strip_size_kb": 0, 00:30:12.527 "state": "online", 00:30:12.527 "raid_level": "raid1", 00:30:12.527 "superblock": false, 00:30:12.527 "num_base_bdevs": 4, 00:30:12.527 "num_base_bdevs_discovered": 3, 00:30:12.527 "num_base_bdevs_operational": 3, 00:30:12.527 "base_bdevs_list": [ 00:30:12.527 { 00:30:12.527 "name": "spare", 00:30:12.527 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": null, 00:30:12.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.527 "is_configured": false, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": "BaseBdev3", 00:30:12.527 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 
00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 }, 00:30:12.527 { 00:30:12.527 "name": "BaseBdev4", 00:30:12.527 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:12.527 "is_configured": true, 00:30:12.527 "data_offset": 0, 00:30:12.527 "data_size": 65536 00:30:12.527 } 00:30:12.527 ] 00:30:12.527 }' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:12.527 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.785 
13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.785 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.785 "name": "raid_bdev1", 00:30:12.785 "uuid": "fe931ecd-6627-4e67-beca-9a164bd06b67", 00:30:12.785 "strip_size_kb": 0, 00:30:12.785 "state": "online", 00:30:12.785 "raid_level": "raid1", 00:30:12.785 "superblock": false, 00:30:12.785 "num_base_bdevs": 4, 00:30:12.785 "num_base_bdevs_discovered": 3, 00:30:12.785 "num_base_bdevs_operational": 3, 00:30:12.785 "base_bdevs_list": [ 00:30:12.785 { 00:30:12.785 "name": "spare", 00:30:12.785 "uuid": "ff28b0ef-611b-57ab-962c-1b7f7faec5b2", 00:30:12.785 "is_configured": true, 00:30:12.785 "data_offset": 0, 00:30:12.785 "data_size": 65536 00:30:12.785 }, 00:30:12.785 { 00:30:12.785 "name": null, 00:30:12.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.786 "is_configured": false, 00:30:12.786 "data_offset": 0, 00:30:12.786 "data_size": 65536 00:30:12.786 }, 00:30:12.786 { 00:30:12.786 "name": "BaseBdev3", 00:30:12.786 "uuid": "2383a452-5672-5200-b74b-5a970792efb9", 00:30:12.786 "is_configured": true, 00:30:12.786 "data_offset": 0, 00:30:12.786 "data_size": 65536 00:30:12.786 }, 00:30:12.786 { 00:30:12.786 "name": "BaseBdev4", 00:30:12.786 "uuid": "1901012a-9d4c-5a39-aa17-2b47676eeb3c", 00:30:12.786 "is_configured": true, 00:30:12.786 "data_offset": 0, 00:30:12.786 "data_size": 65536 00:30:12.786 } 00:30:12.786 ] 00:30:12.786 }' 00:30:12.786 13:41:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.786 13:41:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.353 [2024-10-28 13:41:27.214487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:13.353 [2024-10-28 13:41:27.214593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:13.353 [2024-10-28 13:41:27.214749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:13.353 [2024-10-28 13:41:27.214894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:13.353 [2024-10-28 13:41:27.214916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.353 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:13.611 /dev/nbd0 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:13.611 
13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:13.611 1+0 records in 00:30:13.611 1+0 records out 00:30:13.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348283 s, 11.8 MB/s 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.611 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:13.869 /dev/nbd1 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:13.869 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:13.869 13:41:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:13.870 1+0 records in 00:30:13.870 1+0 records out 00:30:13.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273313 s, 15.0 MB/s 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.870 13:41:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.146 13:41:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.146 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.404 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.663 13:41:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 90244 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 90244 ']' 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 90244 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90244 00:30:14.663 killing process with pid 90244 00:30:14.663 Received shutdown signal, test time was about 60.000000 seconds 00:30:14.663 00:30:14.663 Latency(us) 00:30:14.663 [2024-10-28T13:41:28.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.663 [2024-10-28T13:41:28.823Z] =================================================================================================================== 00:30:14.663 [2024-10-28T13:41:28.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90244' 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 90244 00:30:14.663 [2024-10-28 
13:41:28.791719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:14.663 13:41:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 90244 00:30:14.922 [2024-10-28 13:41:28.848007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:15.181 ************************************ 00:30:15.181 END TEST raid_rebuild_test 00:30:15.181 ************************************ 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:30:15.181 00:30:15.181 real 0m20.793s 00:30:15.181 user 0m22.473s 00:30:15.181 sys 0m3.889s 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.181 13:41:29 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:30:15.181 13:41:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:15.181 13:41:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.181 13:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:15.181 ************************************ 00:30:15.181 START TEST raid_rebuild_test_sb 00:30:15.181 ************************************ 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:15.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90725 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90725 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90725 ']' 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:15.181 13:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.181 [2024-10-28 13:41:29.259950] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:30:15.181 [2024-10-28 13:41:29.260333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90725 ] 00:30:15.181 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:15.181 Zero copy mechanism will not be used. 00:30:15.440 [2024-10-28 13:41:29.405569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:15.440 [2024-10-28 13:41:29.435288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.440 [2024-10-28 13:41:29.489378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.440 [2024-10-28 13:41:29.547009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:15.440 [2024-10-28 13:41:29.547295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 BaseBdev1_malloc 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p 
BaseBdev1 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 [2024-10-28 13:41:30.314080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:16.379 [2024-10-28 13:41:30.314185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.379 [2024-10-28 13:41:30.314242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:16.379 [2024-10-28 13:41:30.314266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.379 [2024-10-28 13:41:30.317172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.379 [2024-10-28 13:41:30.317218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:16.379 BaseBdev1 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.379 BaseBdev2_malloc 00:30:16.379 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:16.380 [2024-10-28 13:41:30.341646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:16.380 [2024-10-28 13:41:30.341723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.380 [2024-10-28 13:41:30.341753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:16.380 [2024-10-28 13:41:30.341771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.380 [2024-10-28 13:41:30.344532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.380 [2024-10-28 13:41:30.344742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:16.380 BaseBdev2 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 BaseBdev3_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 [2024-10-28 13:41:30.369175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:16.380 [2024-10-28 
13:41:30.369249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.380 [2024-10-28 13:41:30.369282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:16.380 [2024-10-28 13:41:30.369301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.380 [2024-10-28 13:41:30.372027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.380 [2024-10-28 13:41:30.372213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:16.380 BaseBdev3 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 BaseBdev4_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 [2024-10-28 13:41:30.409946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:16.380 [2024-10-28 13:41:30.410021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.380 [2024-10-28 13:41:30.410051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009680 00:30:16.380 [2024-10-28 13:41:30.410069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.380 [2024-10-28 13:41:30.412860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.380 [2024-10-28 13:41:30.413040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:16.380 BaseBdev4 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 spare_malloc 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 spare_delay 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 [2024-10-28 13:41:30.445608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:16.380 [2024-10-28 13:41:30.445690] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:16.380 [2024-10-28 13:41:30.445721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:16.380 [2024-10-28 13:41:30.445738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:16.380 [2024-10-28 13:41:30.448484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:16.380 [2024-10-28 13:41:30.448553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:16.380 spare 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 [2024-10-28 13:41:30.457731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:16.380 [2024-10-28 13:41:30.460129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:16.380 [2024-10-28 13:41:30.460243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:16.380 [2024-10-28 13:41:30.460320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:16.380 [2024-10-28 13:41:30.460554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:30:16.380 [2024-10-28 13:41:30.460586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:16.380 [2024-10-28 13:41:30.460904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:16.380 [2024-10-28 13:41:30.461123] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:30:16.380 [2024-10-28 13:41:30.461159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:30:16.380 [2024-10-28 13:41:30.461341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:16.380 "name": "raid_bdev1", 00:30:16.380 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:16.380 "strip_size_kb": 0, 00:30:16.380 "state": "online", 00:30:16.380 "raid_level": "raid1", 00:30:16.380 "superblock": true, 00:30:16.380 "num_base_bdevs": 4, 00:30:16.380 "num_base_bdevs_discovered": 4, 00:30:16.380 "num_base_bdevs_operational": 4, 00:30:16.380 "base_bdevs_list": [ 00:30:16.380 { 00:30:16.380 "name": "BaseBdev1", 00:30:16.380 "uuid": "6990ba8a-3638-5ee9-85dd-7e004e67ca50", 00:30:16.380 "is_configured": true, 00:30:16.380 "data_offset": 2048, 00:30:16.380 "data_size": 63488 00:30:16.380 }, 00:30:16.380 { 00:30:16.380 "name": "BaseBdev2", 00:30:16.380 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:16.380 "is_configured": true, 00:30:16.380 "data_offset": 2048, 00:30:16.380 "data_size": 63488 00:30:16.380 }, 00:30:16.380 { 00:30:16.380 "name": "BaseBdev3", 00:30:16.380 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:16.380 "is_configured": true, 00:30:16.380 "data_offset": 2048, 00:30:16.380 "data_size": 63488 00:30:16.380 }, 00:30:16.380 { 00:30:16.380 "name": "BaseBdev4", 00:30:16.380 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:16.380 "is_configured": true, 00:30:16.380 "data_offset": 2048, 00:30:16.380 "data_size": 63488 00:30:16.380 } 00:30:16.380 ] 00:30:16.380 }' 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:16.380 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.948 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:16.948 13:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:16.948 13:41:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.948 13:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.948 [2024-10-28 13:41:30.998229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:16.948 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:30:17.517 [2024-10-28 13:41:31.418015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:17.517 /dev/nbd0 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:30:17.517 1+0 records in 00:30:17.517 1+0 records out 00:30:17.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575258 s, 7.1 MB/s 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:30:17.517 13:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:25.628 63488+0 records in 00:30:25.628 63488+0 records out 00:30:25.628 32505856 bytes (33 MB, 31 MiB) copied, 7.86593 s, 4.1 MB/s 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:25.628 13:41:39 
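The dd run above writes the full raid bdev: 63488 blocks of 512 bytes, matching the 32505856 bytes (31 MiB) reported in the transfer summary. A quick arithmetic check of those figures (a standalone sketch; the variable names are illustrative, not from the test scripts):

```shell
# Reproduce the size arithmetic from the dd transfer summary above.
blocks=63488        # block count from the dd invocation (count=63488)
block_size=512      # bs=512 from the dd invocation
total=$((blocks * block_size))

echo "$total bytes"                    # 32505856, as reported by dd
echo "$((total / 1024 / 1024)) MiB"    # 31, matching "31 MiB" in the log
```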
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:25.628 [2024-10-28 13:41:39.630626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.628 [2024-10-28 13:41:39.656309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:25.628 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:25.629 13:41:39 
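The `waitfornbd_exit` sequence traced above follows a recurring pattern in `bdev/nbd_common.sh`: loop up to 20 times, `grep -q -w` the nbd name against `/proc/partitions`, and `break` once the condition flips. A minimal generic reconstruction of that polling pattern (the function name `wait_for_condition` is hypothetical; the retry count of 20 is taken from the trace, but the sleep interval is an assumption, since the xtrace output does not show it):

```shell
# Generic retry loop mirroring the waitfornbd/waitfornbd_exit pattern in the trace.
# Usage: wait_for_condition <max_tries> <command...>
# Returns 0 as soon as <command...> succeeds, 1 if it never does.
wait_for_condition() {
    local max_tries=$1
    shift
    local i
    for ((i = 1; i <= max_tries; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1   # interval is an assumption; the trace does not record it
    done
    return 1
}

# Example: wait for a marker file instead of an nbd entry in /proc/partitions.
marker=$(mktemp -u)
( sleep 0.2; touch "$marker" ) &
if wait_for_condition 20 test -e "$marker"; then
    echo "found"
fi
rm -f "$marker"
```

In the real helper, the predicate is `grep -q -w "$nbd_name" /proc/partitions`, and `waitfornbd_exit` inverts the check to wait for the device entry to disappear after `nbd_stop_disk`.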
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.629 "name": "raid_bdev1", 00:30:25.629 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:25.629 "strip_size_kb": 0, 00:30:25.629 "state": "online", 00:30:25.629 "raid_level": "raid1", 00:30:25.629 "superblock": true, 00:30:25.629 "num_base_bdevs": 4, 00:30:25.629 "num_base_bdevs_discovered": 3, 00:30:25.629 "num_base_bdevs_operational": 3, 00:30:25.629 "base_bdevs_list": [ 00:30:25.629 { 00:30:25.629 "name": null, 00:30:25.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.629 
"is_configured": false, 00:30:25.629 "data_offset": 0, 00:30:25.629 "data_size": 63488 00:30:25.629 }, 00:30:25.629 { 00:30:25.629 "name": "BaseBdev2", 00:30:25.629 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:25.629 "is_configured": true, 00:30:25.629 "data_offset": 2048, 00:30:25.629 "data_size": 63488 00:30:25.629 }, 00:30:25.629 { 00:30:25.629 "name": "BaseBdev3", 00:30:25.629 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:25.629 "is_configured": true, 00:30:25.629 "data_offset": 2048, 00:30:25.629 "data_size": 63488 00:30:25.629 }, 00:30:25.629 { 00:30:25.629 "name": "BaseBdev4", 00:30:25.629 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:25.629 "is_configured": true, 00:30:25.629 "data_offset": 2048, 00:30:25.629 "data_size": 63488 00:30:25.629 } 00:30:25.629 ] 00:30:25.629 }' 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.629 13:41:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:26.195 13:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.195 13:41:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.195 13:41:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:26.195 [2024-10-28 13:41:40.152561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.195 [2024-10-28 13:41:40.158961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:30:26.195 13:41:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.195 13:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:26.195 [2024-10-28 13:41:40.162315] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.130 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:27.130 "name": "raid_bdev1", 00:30:27.130 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:27.130 "strip_size_kb": 0, 00:30:27.130 "state": "online", 00:30:27.130 "raid_level": "raid1", 00:30:27.130 "superblock": true, 00:30:27.130 "num_base_bdevs": 4, 00:30:27.130 "num_base_bdevs_discovered": 4, 00:30:27.130 "num_base_bdevs_operational": 4, 00:30:27.130 "process": { 00:30:27.130 "type": "rebuild", 00:30:27.130 "target": "spare", 00:30:27.130 "progress": { 00:30:27.130 "blocks": 20480, 00:30:27.130 "percent": 32 00:30:27.130 } 00:30:27.130 }, 00:30:27.130 "base_bdevs_list": [ 00:30:27.130 { 00:30:27.130 "name": "spare", 00:30:27.130 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:27.130 "is_configured": true, 00:30:27.130 "data_offset": 2048, 00:30:27.130 "data_size": 63488 00:30:27.130 }, 00:30:27.130 { 
00:30:27.130 "name": "BaseBdev2", 00:30:27.130 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:27.130 "is_configured": true, 00:30:27.130 "data_offset": 2048, 00:30:27.130 "data_size": 63488 00:30:27.130 }, 00:30:27.130 { 00:30:27.130 "name": "BaseBdev3", 00:30:27.130 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:27.131 "is_configured": true, 00:30:27.131 "data_offset": 2048, 00:30:27.131 "data_size": 63488 00:30:27.131 }, 00:30:27.131 { 00:30:27.131 "name": "BaseBdev4", 00:30:27.131 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:27.131 "is_configured": true, 00:30:27.131 "data_offset": 2048, 00:30:27.131 "data_size": 63488 00:30:27.131 } 00:30:27.131 ] 00:30:27.131 }' 00:30:27.131 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:27.131 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:27.131 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.390 [2024-10-28 13:41:41.308021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.390 [2024-10-28 13:41:41.372076] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:27.390 [2024-10-28 13:41:41.372201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.390 [2024-10-28 13:41:41.372237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:27.390 [2024-10-28 13:41:41.372256] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.390 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.390 "name": 
"raid_bdev1", 00:30:27.390 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:27.390 "strip_size_kb": 0, 00:30:27.390 "state": "online", 00:30:27.390 "raid_level": "raid1", 00:30:27.390 "superblock": true, 00:30:27.390 "num_base_bdevs": 4, 00:30:27.390 "num_base_bdevs_discovered": 3, 00:30:27.390 "num_base_bdevs_operational": 3, 00:30:27.390 "base_bdevs_list": [ 00:30:27.390 { 00:30:27.390 "name": null, 00:30:27.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.390 "is_configured": false, 00:30:27.390 "data_offset": 0, 00:30:27.390 "data_size": 63488 00:30:27.390 }, 00:30:27.390 { 00:30:27.390 "name": "BaseBdev2", 00:30:27.390 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:27.391 "is_configured": true, 00:30:27.391 "data_offset": 2048, 00:30:27.391 "data_size": 63488 00:30:27.391 }, 00:30:27.391 { 00:30:27.391 "name": "BaseBdev3", 00:30:27.391 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:27.391 "is_configured": true, 00:30:27.391 "data_offset": 2048, 00:30:27.391 "data_size": 63488 00:30:27.391 }, 00:30:27.391 { 00:30:27.391 "name": "BaseBdev4", 00:30:27.391 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:27.391 "is_configured": true, 00:30:27.391 "data_offset": 2048, 00:30:27.391 "data_size": 63488 00:30:27.391 } 00:30:27.391 ] 00:30:27.391 }' 00:30:27.391 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.391 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:27.959 13:41:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:27.959 "name": "raid_bdev1", 00:30:27.959 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:27.959 "strip_size_kb": 0, 00:30:27.959 "state": "online", 00:30:27.959 "raid_level": "raid1", 00:30:27.959 "superblock": true, 00:30:27.959 "num_base_bdevs": 4, 00:30:27.959 "num_base_bdevs_discovered": 3, 00:30:27.959 "num_base_bdevs_operational": 3, 00:30:27.959 "base_bdevs_list": [ 00:30:27.959 { 00:30:27.959 "name": null, 00:30:27.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.959 "is_configured": false, 00:30:27.959 "data_offset": 0, 00:30:27.959 "data_size": 63488 00:30:27.959 }, 00:30:27.959 { 00:30:27.959 "name": "BaseBdev2", 00:30:27.959 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:27.959 "is_configured": true, 00:30:27.959 "data_offset": 2048, 00:30:27.959 "data_size": 63488 00:30:27.959 }, 00:30:27.959 { 00:30:27.959 "name": "BaseBdev3", 00:30:27.959 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:27.959 "is_configured": true, 00:30:27.959 "data_offset": 2048, 00:30:27.959 "data_size": 63488 00:30:27.959 }, 00:30:27.959 { 00:30:27.959 "name": "BaseBdev4", 00:30:27.959 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:27.959 "is_configured": true, 00:30:27.959 "data_offset": 2048, 00:30:27.959 
"data_size": 63488 00:30:27.959 } 00:30:27.959 ] 00:30:27.959 }' 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:27.959 13:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.959 [2024-10-28 13:41:42.058524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:27.959 [2024-10-28 13:41:42.064133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.959 13:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:27.959 [2024-10-28 13:41:42.066829] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.336 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:29.337 "name": "raid_bdev1", 00:30:29.337 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:29.337 "strip_size_kb": 0, 00:30:29.337 "state": "online", 00:30:29.337 "raid_level": "raid1", 00:30:29.337 "superblock": true, 00:30:29.337 "num_base_bdevs": 4, 00:30:29.337 "num_base_bdevs_discovered": 4, 00:30:29.337 "num_base_bdevs_operational": 4, 00:30:29.337 "process": { 00:30:29.337 "type": "rebuild", 00:30:29.337 "target": "spare", 00:30:29.337 "progress": { 00:30:29.337 "blocks": 20480, 00:30:29.337 "percent": 32 00:30:29.337 } 00:30:29.337 }, 00:30:29.337 "base_bdevs_list": [ 00:30:29.337 { 00:30:29.337 "name": "spare", 00:30:29.337 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": "BaseBdev2", 00:30:29.337 "uuid": "2a12823d-abbc-5daf-8f8b-75c28f4f01e8", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": "BaseBdev3", 00:30:29.337 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": "BaseBdev4", 00:30:29.337 "uuid": 
"96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 } 00:30:29.337 ] 00:30:29.337 }' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:29.337 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.337 [2024-10-28 13:41:43.229058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:29.337 [2024-10-28 13:41:43.375842] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.337 13:41:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:29.337 "name": "raid_bdev1", 00:30:29.337 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:29.337 "strip_size_kb": 0, 00:30:29.337 "state": "online", 00:30:29.337 "raid_level": "raid1", 00:30:29.337 "superblock": true, 00:30:29.337 "num_base_bdevs": 4, 00:30:29.337 "num_base_bdevs_discovered": 3, 00:30:29.337 "num_base_bdevs_operational": 3, 00:30:29.337 "process": { 00:30:29.337 "type": "rebuild", 00:30:29.337 "target": "spare", 00:30:29.337 "progress": { 00:30:29.337 "blocks": 24576, 00:30:29.337 "percent": 38 00:30:29.337 } 00:30:29.337 }, 00:30:29.337 "base_bdevs_list": 
[ 00:30:29.337 { 00:30:29.337 "name": "spare", 00:30:29.337 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": null, 00:30:29.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.337 "is_configured": false, 00:30:29.337 "data_offset": 0, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": "BaseBdev3", 00:30:29.337 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 }, 00:30:29.337 { 00:30:29.337 "name": "BaseBdev4", 00:30:29.337 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:29.337 "is_configured": true, 00:30:29.337 "data_offset": 2048, 00:30:29.337 "data_size": 63488 00:30:29.337 } 00:30:29.337 ] 00:30:29.337 }' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:29.337 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=437 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:29.596 13:41:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.596 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:29.596 "name": "raid_bdev1", 00:30:29.596 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:29.596 "strip_size_kb": 0, 00:30:29.596 "state": "online", 00:30:29.596 "raid_level": "raid1", 00:30:29.596 "superblock": true, 00:30:29.596 "num_base_bdevs": 4, 00:30:29.596 "num_base_bdevs_discovered": 3, 00:30:29.596 "num_base_bdevs_operational": 3, 00:30:29.596 "process": { 00:30:29.596 "type": "rebuild", 00:30:29.596 "target": "spare", 00:30:29.596 "progress": { 00:30:29.596 "blocks": 26624, 00:30:29.596 "percent": 41 00:30:29.596 } 00:30:29.596 }, 00:30:29.596 "base_bdevs_list": [ 00:30:29.596 { 00:30:29.596 "name": "spare", 00:30:29.596 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:29.596 "is_configured": true, 00:30:29.596 "data_offset": 2048, 00:30:29.596 "data_size": 63488 00:30:29.596 }, 00:30:29.596 { 00:30:29.596 "name": null, 00:30:29.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.596 "is_configured": false, 00:30:29.596 "data_offset": 0, 00:30:29.596 "data_size": 63488 00:30:29.596 }, 00:30:29.596 { 00:30:29.596 "name": "BaseBdev3", 00:30:29.597 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:29.597 
"is_configured": true, 00:30:29.597 "data_offset": 2048, 00:30:29.597 "data_size": 63488 00:30:29.597 }, 00:30:29.597 { 00:30:29.597 "name": "BaseBdev4", 00:30:29.597 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:29.597 "is_configured": true, 00:30:29.597 "data_offset": 2048, 00:30:29.597 "data_size": 63488 00:30:29.597 } 00:30:29.597 ] 00:30:29.597 }' 00:30:29.597 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:29.597 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:29.597 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:29.597 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.597 13:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.974 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:30.974 "name": "raid_bdev1", 00:30:30.974 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:30.974 "strip_size_kb": 0, 00:30:30.974 "state": "online", 00:30:30.974 "raid_level": "raid1", 00:30:30.974 "superblock": true, 00:30:30.974 "num_base_bdevs": 4, 00:30:30.974 "num_base_bdevs_discovered": 3, 00:30:30.974 "num_base_bdevs_operational": 3, 00:30:30.974 "process": { 00:30:30.974 "type": "rebuild", 00:30:30.974 "target": "spare", 00:30:30.974 "progress": { 00:30:30.974 "blocks": 51200, 00:30:30.974 "percent": 80 00:30:30.974 } 00:30:30.974 }, 00:30:30.975 "base_bdevs_list": [ 00:30:30.975 { 00:30:30.975 "name": "spare", 00:30:30.975 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:30.975 "is_configured": true, 00:30:30.975 "data_offset": 2048, 00:30:30.975 "data_size": 63488 00:30:30.975 }, 00:30:30.975 { 00:30:30.975 "name": null, 00:30:30.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.975 "is_configured": false, 00:30:30.975 "data_offset": 0, 00:30:30.975 "data_size": 63488 00:30:30.975 }, 00:30:30.975 { 00:30:30.975 "name": "BaseBdev3", 00:30:30.975 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:30.975 "is_configured": true, 00:30:30.975 "data_offset": 2048, 00:30:30.975 "data_size": 63488 00:30:30.975 }, 00:30:30.975 { 00:30:30.975 "name": "BaseBdev4", 00:30:30.975 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:30.975 "is_configured": true, 00:30:30.975 "data_offset": 2048, 00:30:30.975 "data_size": 63488 00:30:30.975 } 00:30:30.975 ] 00:30:30.975 }' 00:30:30.975 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:30.975 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:30:30.975 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:30.975 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.975 13:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:31.234 [2024-10-28 13:41:45.290441] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:31.234 [2024-10-28 13:41:45.290776] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:31.234 [2024-10-28 13:41:45.290972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- 
# raid_bdev_info='{ 00:30:31.829 "name": "raid_bdev1", 00:30:31.829 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:31.829 "strip_size_kb": 0, 00:30:31.829 "state": "online", 00:30:31.829 "raid_level": "raid1", 00:30:31.829 "superblock": true, 00:30:31.829 "num_base_bdevs": 4, 00:30:31.829 "num_base_bdevs_discovered": 3, 00:30:31.829 "num_base_bdevs_operational": 3, 00:30:31.829 "base_bdevs_list": [ 00:30:31.829 { 00:30:31.829 "name": "spare", 00:30:31.829 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:31.829 "is_configured": true, 00:30:31.829 "data_offset": 2048, 00:30:31.829 "data_size": 63488 00:30:31.829 }, 00:30:31.829 { 00:30:31.829 "name": null, 00:30:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.829 "is_configured": false, 00:30:31.829 "data_offset": 0, 00:30:31.829 "data_size": 63488 00:30:31.829 }, 00:30:31.829 { 00:30:31.829 "name": "BaseBdev3", 00:30:31.829 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:31.829 "is_configured": true, 00:30:31.829 "data_offset": 2048, 00:30:31.829 "data_size": 63488 00:30:31.829 }, 00:30:31.829 { 00:30:31.829 "name": "BaseBdev4", 00:30:31.829 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:31.829 "is_configured": true, 00:30:31.829 "data_offset": 2048, 00:30:31.829 "data_size": 63488 00:30:31.829 } 00:30:31.829 ] 00:30:31.829 }' 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:31.829 13:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:32.088 "name": "raid_bdev1", 00:30:32.088 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:32.088 "strip_size_kb": 0, 00:30:32.088 "state": "online", 00:30:32.088 "raid_level": "raid1", 00:30:32.088 "superblock": true, 00:30:32.088 "num_base_bdevs": 4, 00:30:32.088 "num_base_bdevs_discovered": 3, 00:30:32.088 "num_base_bdevs_operational": 3, 00:30:32.088 "base_bdevs_list": [ 00:30:32.088 { 00:30:32.088 "name": "spare", 00:30:32.088 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:32.088 "is_configured": true, 00:30:32.088 "data_offset": 2048, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.088 { 00:30:32.088 "name": null, 00:30:32.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.088 "is_configured": false, 00:30:32.088 "data_offset": 0, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.088 { 00:30:32.088 "name": "BaseBdev3", 00:30:32.088 "uuid": 
"97c93073-def1-5366-bbee-77846a9b83af", 00:30:32.088 "is_configured": true, 00:30:32.088 "data_offset": 2048, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.088 { 00:30:32.088 "name": "BaseBdev4", 00:30:32.088 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:32.088 "is_configured": true, 00:30:32.088 "data_offset": 2048, 00:30:32.088 "data_size": 63488 00:30:32.088 } 00:30:32.088 ] 00:30:32.088 }' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.088 13:41:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.088 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.088 "name": "raid_bdev1", 00:30:32.088 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:32.088 "strip_size_kb": 0, 00:30:32.088 "state": "online", 00:30:32.088 "raid_level": "raid1", 00:30:32.088 "superblock": true, 00:30:32.088 "num_base_bdevs": 4, 00:30:32.088 "num_base_bdevs_discovered": 3, 00:30:32.088 "num_base_bdevs_operational": 3, 00:30:32.088 "base_bdevs_list": [ 00:30:32.088 { 00:30:32.088 "name": "spare", 00:30:32.088 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:32.088 "is_configured": true, 00:30:32.088 "data_offset": 2048, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.088 { 00:30:32.088 "name": null, 00:30:32.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.088 "is_configured": false, 00:30:32.088 "data_offset": 0, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.088 { 00:30:32.088 "name": "BaseBdev3", 00:30:32.088 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:32.088 "is_configured": true, 00:30:32.088 "data_offset": 2048, 00:30:32.088 "data_size": 63488 00:30:32.088 }, 00:30:32.089 { 00:30:32.089 "name": "BaseBdev4", 00:30:32.089 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:32.089 "is_configured": true, 00:30:32.089 "data_offset": 2048, 00:30:32.089 "data_size": 63488 00:30:32.089 } 00:30:32.089 ] 00:30:32.089 }' 00:30:32.089 13:41:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.089 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.656 [2024-10-28 13:41:46.705170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:32.656 [2024-10-28 13:41:46.705347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:32.656 [2024-10-28 13:41:46.705499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.656 [2024-10-28 13:41:46.705618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:32.656 [2024-10-28 13:41:46.705636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:32.656 13:41:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:32.656 13:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:32.915 /dev/nbd0 00:30:32.915 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w 
nbd0 /proc/partitions 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.174 1+0 records in 00:30:33.174 1+0 records out 00:30:33.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320621 s, 12.8 MB/s 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:33.174 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:33.433 /dev/nbd1 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:33.433 13:41:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:33.433 1+0 records in 00:30:33.433 1+0 records out 00:30:33.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332401 s, 12.3 MB/s 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:33.433 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:33.434 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:33.693 13:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:33.951 13:41:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.951 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.210 [2024-10-28 13:41:48.117828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:34.210 [2024-10-28 13:41:48.117910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.210 [2024-10-28 13:41:48.117961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:34.210 [2024-10-28 13:41:48.117976] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.210 [2024-10-28 13:41:48.120932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.210 [2024-10-28 13:41:48.120980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:34.210 [2024-10-28 13:41:48.121101] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:34.210 [2024-10-28 13:41:48.121174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:34.210 [2024-10-28 13:41:48.121340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:34.210 [2024-10-28 13:41:48.121470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:34.210 spare 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.210 [2024-10-28 13:41:48.221613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:34.210 [2024-10-28 13:41:48.221904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:34.210 [2024-10-28 13:41:48.222428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:30:34.210 [2024-10-28 13:41:48.222695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:34.210 [2024-10-28 13:41:48.222723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:34.210 [2024-10-28 13:41:48.222961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.210 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.210 "name": "raid_bdev1", 00:30:34.210 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:34.210 
"strip_size_kb": 0, 00:30:34.210 "state": "online", 00:30:34.210 "raid_level": "raid1", 00:30:34.210 "superblock": true, 00:30:34.210 "num_base_bdevs": 4, 00:30:34.211 "num_base_bdevs_discovered": 3, 00:30:34.211 "num_base_bdevs_operational": 3, 00:30:34.211 "base_bdevs_list": [ 00:30:34.211 { 00:30:34.211 "name": "spare", 00:30:34.211 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:34.211 "is_configured": true, 00:30:34.211 "data_offset": 2048, 00:30:34.211 "data_size": 63488 00:30:34.211 }, 00:30:34.211 { 00:30:34.211 "name": null, 00:30:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.211 "is_configured": false, 00:30:34.211 "data_offset": 2048, 00:30:34.211 "data_size": 63488 00:30:34.211 }, 00:30:34.211 { 00:30:34.211 "name": "BaseBdev3", 00:30:34.211 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:34.211 "is_configured": true, 00:30:34.211 "data_offset": 2048, 00:30:34.211 "data_size": 63488 00:30:34.211 }, 00:30:34.211 { 00:30:34.211 "name": "BaseBdev4", 00:30:34.211 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:34.211 "is_configured": true, 00:30:34.211 "data_offset": 2048, 00:30:34.211 "data_size": 63488 00:30:34.211 } 00:30:34.211 ] 00:30:34.211 }' 00:30:34.211 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.211 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:34.778 13:41:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:34.778 "name": "raid_bdev1", 00:30:34.778 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:34.778 "strip_size_kb": 0, 00:30:34.778 "state": "online", 00:30:34.778 "raid_level": "raid1", 00:30:34.778 "superblock": true, 00:30:34.778 "num_base_bdevs": 4, 00:30:34.778 "num_base_bdevs_discovered": 3, 00:30:34.778 "num_base_bdevs_operational": 3, 00:30:34.778 "base_bdevs_list": [ 00:30:34.778 { 00:30:34.778 "name": "spare", 00:30:34.778 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:34.778 "is_configured": true, 00:30:34.778 "data_offset": 2048, 00:30:34.778 "data_size": 63488 00:30:34.778 }, 00:30:34.778 { 00:30:34.778 "name": null, 00:30:34.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.778 "is_configured": false, 00:30:34.778 "data_offset": 2048, 00:30:34.778 "data_size": 63488 00:30:34.778 }, 00:30:34.778 { 00:30:34.778 "name": "BaseBdev3", 00:30:34.778 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:34.778 "is_configured": true, 00:30:34.778 "data_offset": 2048, 00:30:34.778 "data_size": 63488 00:30:34.778 }, 00:30:34.778 { 00:30:34.778 "name": "BaseBdev4", 00:30:34.778 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:34.778 "is_configured": true, 00:30:34.778 "data_offset": 2048, 00:30:34.778 "data_size": 63488 00:30:34.778 } 00:30:34.778 ] 00:30:34.778 }' 00:30:34.778 13:41:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.778 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.036 [2024-10-28 13:41:48.935157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:35.036 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.036 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:35.036 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:35.036 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:35.036 13:41:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:35.037 "name": "raid_bdev1", 00:30:35.037 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:35.037 "strip_size_kb": 0, 00:30:35.037 "state": "online", 00:30:35.037 "raid_level": "raid1", 00:30:35.037 "superblock": true, 00:30:35.037 "num_base_bdevs": 4, 00:30:35.037 "num_base_bdevs_discovered": 2, 00:30:35.037 "num_base_bdevs_operational": 2, 00:30:35.037 "base_bdevs_list": [ 00:30:35.037 { 00:30:35.037 "name": null, 00:30:35.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.037 "is_configured": false, 00:30:35.037 "data_offset": 0, 00:30:35.037 "data_size": 63488 00:30:35.037 }, 00:30:35.037 { 
00:30:35.037 "name": null, 00:30:35.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.037 "is_configured": false, 00:30:35.037 "data_offset": 2048, 00:30:35.037 "data_size": 63488 00:30:35.037 }, 00:30:35.037 { 00:30:35.037 "name": "BaseBdev3", 00:30:35.037 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:35.037 "is_configured": true, 00:30:35.037 "data_offset": 2048, 00:30:35.037 "data_size": 63488 00:30:35.037 }, 00:30:35.037 { 00:30:35.037 "name": "BaseBdev4", 00:30:35.037 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:35.037 "is_configured": true, 00:30:35.037 "data_offset": 2048, 00:30:35.037 "data_size": 63488 00:30:35.037 } 00:30:35.037 ] 00:30:35.037 }' 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:35.037 13:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.603 13:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:35.603 13:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.603 13:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.603 [2024-10-28 13:41:49.463291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:35.603 [2024-10-28 13:41:49.464271] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:35.603 [2024-10-28 13:41:49.464300] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:35.603 [2024-10-28 13:41:49.464353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:35.603 [2024-10-28 13:41:49.469813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:30:35.603 13:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.603 13:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:35.603 [2024-10-28 13:41:49.472418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:36.537 "name": "raid_bdev1", 00:30:36.537 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:36.537 "strip_size_kb": 0, 00:30:36.537 "state": "online", 00:30:36.537 "raid_level": "raid1", 
00:30:36.537 "superblock": true, 00:30:36.537 "num_base_bdevs": 4, 00:30:36.537 "num_base_bdevs_discovered": 3, 00:30:36.537 "num_base_bdevs_operational": 3, 00:30:36.537 "process": { 00:30:36.537 "type": "rebuild", 00:30:36.537 "target": "spare", 00:30:36.537 "progress": { 00:30:36.537 "blocks": 20480, 00:30:36.537 "percent": 32 00:30:36.537 } 00:30:36.537 }, 00:30:36.537 "base_bdevs_list": [ 00:30:36.537 { 00:30:36.537 "name": "spare", 00:30:36.537 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:36.537 "is_configured": true, 00:30:36.537 "data_offset": 2048, 00:30:36.537 "data_size": 63488 00:30:36.537 }, 00:30:36.537 { 00:30:36.537 "name": null, 00:30:36.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.537 "is_configured": false, 00:30:36.537 "data_offset": 2048, 00:30:36.537 "data_size": 63488 00:30:36.537 }, 00:30:36.537 { 00:30:36.537 "name": "BaseBdev3", 00:30:36.537 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:36.537 "is_configured": true, 00:30:36.537 "data_offset": 2048, 00:30:36.537 "data_size": 63488 00:30:36.537 }, 00:30:36.537 { 00:30:36.537 "name": "BaseBdev4", 00:30:36.537 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:36.537 "is_configured": true, 00:30:36.537 "data_offset": 2048, 00:30:36.537 "data_size": 63488 00:30:36.537 } 00:30:36.537 ] 00:30:36.537 }' 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.537 [2024-10-28 13:41:50.642639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:36.537 [2024-10-28 13:41:50.681228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:36.537 [2024-10-28 13:41:50.681319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.537 [2024-10-28 13:41:50.681348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:36.537 [2024-10-28 13:41:50.681361] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.537 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.796 "name": "raid_bdev1", 00:30:36.796 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:36.796 "strip_size_kb": 0, 00:30:36.796 "state": "online", 00:30:36.796 "raid_level": "raid1", 00:30:36.796 "superblock": true, 00:30:36.796 "num_base_bdevs": 4, 00:30:36.796 "num_base_bdevs_discovered": 2, 00:30:36.796 "num_base_bdevs_operational": 2, 00:30:36.796 "base_bdevs_list": [ 00:30:36.796 { 00:30:36.796 "name": null, 00:30:36.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.796 "is_configured": false, 00:30:36.796 "data_offset": 0, 00:30:36.796 "data_size": 63488 00:30:36.796 }, 00:30:36.796 { 00:30:36.796 "name": null, 00:30:36.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.796 "is_configured": false, 00:30:36.796 "data_offset": 2048, 00:30:36.796 "data_size": 63488 00:30:36.796 }, 00:30:36.796 { 00:30:36.796 "name": "BaseBdev3", 00:30:36.796 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:36.796 "is_configured": true, 00:30:36.796 "data_offset": 2048, 00:30:36.796 "data_size": 63488 00:30:36.796 }, 00:30:36.796 { 00:30:36.796 "name": "BaseBdev4", 00:30:36.796 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:36.796 "is_configured": true, 00:30:36.796 "data_offset": 2048, 00:30:36.796 "data_size": 63488 00:30:36.796 } 00:30:36.796 ] 00:30:36.796 }' 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:30:36.796 13:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.362 13:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:37.362 13:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.362 13:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.362 [2024-10-28 13:41:51.243050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:37.362 [2024-10-28 13:41:51.243132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.362 [2024-10-28 13:41:51.243193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:37.362 [2024-10-28 13:41:51.243210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.362 [2024-10-28 13:41:51.243823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.362 [2024-10-28 13:41:51.243866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:37.362 [2024-10-28 13:41:51.244004] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:37.362 [2024-10-28 13:41:51.244025] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:37.362 [2024-10-28 13:41:51.244046] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:37.362 [2024-10-28 13:41:51.244076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:37.362 [2024-10-28 13:41:51.249681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:30:37.362 spare 00:30:37.362 13:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.362 13:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:37.362 [2024-10-28 13:41:51.252362] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:38.297 "name": "raid_bdev1", 00:30:38.297 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:38.297 "strip_size_kb": 0, 00:30:38.297 "state": "online", 00:30:38.297 
"raid_level": "raid1", 00:30:38.297 "superblock": true, 00:30:38.297 "num_base_bdevs": 4, 00:30:38.297 "num_base_bdevs_discovered": 3, 00:30:38.297 "num_base_bdevs_operational": 3, 00:30:38.297 "process": { 00:30:38.297 "type": "rebuild", 00:30:38.297 "target": "spare", 00:30:38.297 "progress": { 00:30:38.297 "blocks": 20480, 00:30:38.297 "percent": 32 00:30:38.297 } 00:30:38.297 }, 00:30:38.297 "base_bdevs_list": [ 00:30:38.297 { 00:30:38.297 "name": "spare", 00:30:38.297 "uuid": "7e4c4292-e82e-5890-b232-af7ae96c0b8a", 00:30:38.297 "is_configured": true, 00:30:38.297 "data_offset": 2048, 00:30:38.297 "data_size": 63488 00:30:38.297 }, 00:30:38.297 { 00:30:38.297 "name": null, 00:30:38.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.297 "is_configured": false, 00:30:38.297 "data_offset": 2048, 00:30:38.297 "data_size": 63488 00:30:38.297 }, 00:30:38.297 { 00:30:38.297 "name": "BaseBdev3", 00:30:38.297 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:38.297 "is_configured": true, 00:30:38.297 "data_offset": 2048, 00:30:38.297 "data_size": 63488 00:30:38.297 }, 00:30:38.297 { 00:30:38.297 "name": "BaseBdev4", 00:30:38.297 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:38.297 "is_configured": true, 00:30:38.297 "data_offset": 2048, 00:30:38.297 "data_size": 63488 00:30:38.297 } 00:30:38.297 ] 00:30:38.297 }' 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:38.297 13:41:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.298 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.298 [2024-10-28 13:41:52.430500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:38.556 [2024-10-28 13:41:52.461116] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:38.556 [2024-10-28 13:41:52.461244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:38.556 [2024-10-28 13:41:52.461271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:38.556 [2024-10-28 13:41:52.461285] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.556 
13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.556 "name": "raid_bdev1", 00:30:38.556 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:38.556 "strip_size_kb": 0, 00:30:38.556 "state": "online", 00:30:38.556 "raid_level": "raid1", 00:30:38.556 "superblock": true, 00:30:38.556 "num_base_bdevs": 4, 00:30:38.556 "num_base_bdevs_discovered": 2, 00:30:38.556 "num_base_bdevs_operational": 2, 00:30:38.556 "base_bdevs_list": [ 00:30:38.556 { 00:30:38.556 "name": null, 00:30:38.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.556 "is_configured": false, 00:30:38.556 "data_offset": 0, 00:30:38.556 "data_size": 63488 00:30:38.556 }, 00:30:38.556 { 00:30:38.556 "name": null, 00:30:38.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.556 "is_configured": false, 00:30:38.556 "data_offset": 2048, 00:30:38.556 "data_size": 63488 00:30:38.556 }, 00:30:38.556 { 00:30:38.556 "name": "BaseBdev3", 00:30:38.556 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:38.556 "is_configured": true, 00:30:38.556 "data_offset": 2048, 00:30:38.556 "data_size": 63488 00:30:38.556 }, 00:30:38.556 { 00:30:38.556 "name": "BaseBdev4", 00:30:38.556 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:38.556 "is_configured": true, 00:30:38.556 "data_offset": 2048, 00:30:38.556 "data_size": 63488 00:30:38.556 } 00:30:38.556 ] 00:30:38.556 }' 00:30:38.556 13:41:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.556 13:41:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:39.125 "name": "raid_bdev1", 00:30:39.125 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:39.125 "strip_size_kb": 0, 00:30:39.125 "state": "online", 00:30:39.125 "raid_level": "raid1", 00:30:39.125 "superblock": true, 00:30:39.125 "num_base_bdevs": 4, 00:30:39.125 "num_base_bdevs_discovered": 2, 00:30:39.125 "num_base_bdevs_operational": 2, 00:30:39.125 "base_bdevs_list": [ 00:30:39.125 { 00:30:39.125 "name": null, 00:30:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.125 "is_configured": false, 00:30:39.125 "data_offset": 0, 00:30:39.125 "data_size": 63488 00:30:39.125 }, 00:30:39.125 
{ 00:30:39.125 "name": null, 00:30:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.125 "is_configured": false, 00:30:39.125 "data_offset": 2048, 00:30:39.125 "data_size": 63488 00:30:39.125 }, 00:30:39.125 { 00:30:39.125 "name": "BaseBdev3", 00:30:39.125 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:39.125 "is_configured": true, 00:30:39.125 "data_offset": 2048, 00:30:39.125 "data_size": 63488 00:30:39.125 }, 00:30:39.125 { 00:30:39.125 "name": "BaseBdev4", 00:30:39.125 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:39.125 "is_configured": true, 00:30:39.125 "data_offset": 2048, 00:30:39.125 "data_size": 63488 00:30:39.125 } 00:30:39.125 ] 00:30:39.125 }' 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.125 [2024-10-28 13:41:53.214953] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:39.125 [2024-10-28 13:41:53.215029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.125 [2024-10-28 13:41:53.215075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:39.125 [2024-10-28 13:41:53.215095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.125 [2024-10-28 13:41:53.215778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.125 [2024-10-28 13:41:53.215943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:39.125 [2024-10-28 13:41:53.216066] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:39.125 [2024-10-28 13:41:53.216095] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:39.125 [2024-10-28 13:41:53.216107] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:39.125 [2024-10-28 13:41:53.216124] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:39.125 BaseBdev1 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.125 13:41:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:40.502 13:41:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.502 "name": "raid_bdev1", 00:30:40.502 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:40.502 "strip_size_kb": 0, 00:30:40.502 "state": "online", 00:30:40.502 "raid_level": "raid1", 00:30:40.502 "superblock": true, 00:30:40.502 "num_base_bdevs": 4, 00:30:40.502 "num_base_bdevs_discovered": 2, 00:30:40.502 "num_base_bdevs_operational": 2, 00:30:40.502 "base_bdevs_list": [ 00:30:40.502 { 00:30:40.502 "name": null, 00:30:40.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.502 "is_configured": false, 00:30:40.502 "data_offset": 0, 00:30:40.502 "data_size": 63488 00:30:40.502 }, 00:30:40.502 { 00:30:40.502 "name": null, 00:30:40.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.502 
"is_configured": false, 00:30:40.502 "data_offset": 2048, 00:30:40.502 "data_size": 63488 00:30:40.502 }, 00:30:40.502 { 00:30:40.502 "name": "BaseBdev3", 00:30:40.502 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:40.502 "is_configured": true, 00:30:40.502 "data_offset": 2048, 00:30:40.502 "data_size": 63488 00:30:40.502 }, 00:30:40.502 { 00:30:40.502 "name": "BaseBdev4", 00:30:40.502 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:40.502 "is_configured": true, 00:30:40.502 "data_offset": 2048, 00:30:40.502 "data_size": 63488 00:30:40.502 } 00:30:40.502 ] 00:30:40.502 }' 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.502 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:30:40.761 "name": "raid_bdev1", 00:30:40.761 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:40.761 "strip_size_kb": 0, 00:30:40.761 "state": "online", 00:30:40.761 "raid_level": "raid1", 00:30:40.761 "superblock": true, 00:30:40.761 "num_base_bdevs": 4, 00:30:40.761 "num_base_bdevs_discovered": 2, 00:30:40.761 "num_base_bdevs_operational": 2, 00:30:40.761 "base_bdevs_list": [ 00:30:40.761 { 00:30:40.761 "name": null, 00:30:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.761 "is_configured": false, 00:30:40.761 "data_offset": 0, 00:30:40.761 "data_size": 63488 00:30:40.761 }, 00:30:40.761 { 00:30:40.761 "name": null, 00:30:40.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.761 "is_configured": false, 00:30:40.761 "data_offset": 2048, 00:30:40.761 "data_size": 63488 00:30:40.761 }, 00:30:40.761 { 00:30:40.761 "name": "BaseBdev3", 00:30:40.761 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:40.761 "is_configured": true, 00:30:40.761 "data_offset": 2048, 00:30:40.761 "data_size": 63488 00:30:40.761 }, 00:30:40.761 { 00:30:40.761 "name": "BaseBdev4", 00:30:40.761 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:40.761 "is_configured": true, 00:30:40.761 "data_offset": 2048, 00:30:40.761 "data_size": 63488 00:30:40.761 } 00:30:40.761 ] 00:30:40.761 }' 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.761 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.020 [2024-10-28 13:41:54.919594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:41.020 [2024-10-28 13:41:54.919843] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:41.020 [2024-10-28 13:41:54.919864] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:41.020 request: 00:30:41.020 { 00:30:41.020 "base_bdev": "BaseBdev1", 00:30:41.020 "raid_bdev": "raid_bdev1", 00:30:41.020 "method": "bdev_raid_add_base_bdev", 00:30:41.020 "req_id": 1 00:30:41.020 } 00:30:41.020 Got JSON-RPC error response 00:30:41.020 response: 00:30:41.020 { 00:30:41.020 "code": -22, 00:30:41.020 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:41.020 } 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:41.020 13:41:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.962 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.962 "name": "raid_bdev1", 00:30:41.962 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:41.962 "strip_size_kb": 0, 00:30:41.962 "state": "online", 00:30:41.962 "raid_level": "raid1", 00:30:41.962 "superblock": true, 00:30:41.962 "num_base_bdevs": 4, 00:30:41.962 "num_base_bdevs_discovered": 2, 00:30:41.962 "num_base_bdevs_operational": 2, 00:30:41.962 "base_bdevs_list": [ 00:30:41.962 { 00:30:41.962 "name": null, 00:30:41.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.962 "is_configured": false, 00:30:41.962 "data_offset": 0, 00:30:41.962 "data_size": 63488 00:30:41.962 }, 00:30:41.962 { 00:30:41.962 "name": null, 00:30:41.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.962 "is_configured": false, 00:30:41.962 "data_offset": 2048, 00:30:41.962 "data_size": 63488 00:30:41.963 }, 00:30:41.963 { 00:30:41.963 "name": "BaseBdev3", 00:30:41.963 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:41.963 "is_configured": true, 00:30:41.963 "data_offset": 2048, 00:30:41.963 "data_size": 63488 00:30:41.963 }, 00:30:41.963 { 00:30:41.963 "name": "BaseBdev4", 00:30:41.963 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:41.963 "is_configured": true, 00:30:41.963 "data_offset": 2048, 00:30:41.963 "data_size": 63488 00:30:41.963 } 00:30:41.963 ] 00:30:41.963 }' 00:30:41.963 13:41:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.963 13:41:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:42.527 13:41:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:42.527 "name": "raid_bdev1", 00:30:42.527 "uuid": "0526e7bc-3e6b-44f1-a7b1-0f6343ca786c", 00:30:42.527 "strip_size_kb": 0, 00:30:42.527 "state": "online", 00:30:42.527 "raid_level": "raid1", 00:30:42.527 "superblock": true, 00:30:42.527 "num_base_bdevs": 4, 00:30:42.527 "num_base_bdevs_discovered": 2, 00:30:42.527 "num_base_bdevs_operational": 2, 00:30:42.527 "base_bdevs_list": [ 00:30:42.527 { 00:30:42.527 "name": null, 00:30:42.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.527 "is_configured": false, 00:30:42.527 "data_offset": 0, 00:30:42.527 "data_size": 63488 00:30:42.527 }, 00:30:42.527 { 00:30:42.527 "name": null, 00:30:42.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.527 "is_configured": false, 00:30:42.527 "data_offset": 2048, 00:30:42.527 "data_size": 63488 00:30:42.527 }, 00:30:42.527 { 00:30:42.527 "name": "BaseBdev3", 00:30:42.527 "uuid": "97c93073-def1-5366-bbee-77846a9b83af", 00:30:42.527 "is_configured": true, 00:30:42.527 "data_offset": 2048, 00:30:42.527 "data_size": 63488 00:30:42.527 }, 
00:30:42.527 { 00:30:42.527 "name": "BaseBdev4", 00:30:42.527 "uuid": "96b1d804-54b8-5a11-bf1d-19ad7132a6a5", 00:30:42.527 "is_configured": true, 00:30:42.527 "data_offset": 2048, 00:30:42.527 "data_size": 63488 00:30:42.527 } 00:30:42.527 ] 00:30:42.527 }' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90725 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90725 ']' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 90725 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90725 00:30:42.527 killing process with pid 90725 00:30:42.527 Received shutdown signal, test time was about 60.000000 seconds 00:30:42.527 00:30:42.527 Latency(us) 00:30:42.527 [2024-10-28T13:41:56.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.527 [2024-10-28T13:41:56.687Z] =================================================================================================================== 00:30:42.527 [2024-10-28T13:41:56.687Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90725' 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 90725 00:30:42.527 [2024-10-28 13:41:56.669156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:42.527 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 90725 00:30:42.527 [2024-10-28 13:41:56.669314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:42.528 [2024-10-28 13:41:56.669418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:42.528 [2024-10-28 13:41:56.669436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:42.784 [2024-10-28 13:41:56.730146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:43.042 ************************************ 00:30:43.042 END TEST raid_rebuild_test_sb 00:30:43.042 ************************************ 00:30:43.042 13:41:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:30:43.042 00:30:43.042 real 0m27.816s 00:30:43.042 user 0m34.078s 00:30:43.042 sys 0m4.085s 00:30:43.042 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:43.042 13:41:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.042 13:41:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:43.042 13:41:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:43.042 13:41:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:43.042 13:41:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:30:43.042 ************************************ 00:30:43.042 START TEST raid_rebuild_test_io 00:30:43.042 ************************************ 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91523 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91523 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 91523 ']' 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:43.042 13:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.042 [2024-10-28 13:41:57.128230] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:30:43.042 [2024-10-28 13:41:57.129297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91523 ] 00:30:43.042 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:43.042 Zero copy mechanism will not be used. 00:30:43.299 [2024-10-28 13:41:57.272722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:43.299 [2024-10-28 13:41:57.299625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.299 [2024-10-28 13:41:57.353749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.299 [2024-10-28 13:41:57.411229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:43.299 [2024-10-28 13:41:57.411273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 BaseBdev1_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 [2024-10-28 13:41:58.206493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:44.230 [2024-10-28 13:41:58.206738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.230 [2024-10-28 13:41:58.206796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:44.230 [2024-10-28 
13:41:58.206822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.230 [2024-10-28 13:41:58.209775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.230 [2024-10-28 13:41:58.209947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:44.230 BaseBdev1 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 BaseBdev2_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 [2024-10-28 13:41:58.238672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:44.230 [2024-10-28 13:41:58.238748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.230 [2024-10-28 13:41:58.238779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:44.230 [2024-10-28 13:41:58.238796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.230 [2024-10-28 13:41:58.241669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:30:44.230 [2024-10-28 13:41:58.241721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:44.230 BaseBdev2 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 BaseBdev3_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 [2024-10-28 13:41:58.266529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:44.230 [2024-10-28 13:41:58.266605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.230 [2024-10-28 13:41:58.266638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:44.230 [2024-10-28 13:41:58.266656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.230 [2024-10-28 13:41:58.269417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.230 [2024-10-28 13:41:58.269468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:44.230 BaseBdev3 00:30:44.230 13:41:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 BaseBdev4_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 [2024-10-28 13:41:58.313045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:44.230 [2024-10-28 13:41:58.313126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.230 [2024-10-28 13:41:58.313176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:44.230 [2024-10-28 13:41:58.313195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.230 [2024-10-28 13:41:58.316022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.230 [2024-10-28 13:41:58.316072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:44.230 BaseBdev4 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.230 spare_malloc 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.230 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.231 spare_delay 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.231 [2024-10-28 13:41:58.353175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:44.231 [2024-10-28 13:41:58.353258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.231 [2024-10-28 13:41:58.353292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:44.231 [2024-10-28 13:41:58.353309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.231 [2024-10-28 13:41:58.356265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.231 [2024-10-28 13:41:58.356321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:44.231 spare 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.231 [2024-10-28 13:41:58.361264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:44.231 [2024-10-28 13:41:58.363713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:44.231 [2024-10-28 13:41:58.363823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:44.231 [2024-10-28 13:41:58.363897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:44.231 [2024-10-28 13:41:58.364030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:30:44.231 [2024-10-28 13:41:58.364052] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:44.231 [2024-10-28 13:41:58.364424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:44.231 [2024-10-28 13:41:58.364637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:30:44.231 [2024-10-28 13:41:58.364654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:30:44.231 [2024-10-28 13:41:58.364823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:44.231 13:41:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.231 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.488 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.488 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.488 "name": "raid_bdev1", 00:30:44.488 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:44.488 "strip_size_kb": 0, 00:30:44.488 "state": "online", 00:30:44.488 "raid_level": "raid1", 00:30:44.488 "superblock": false, 00:30:44.488 "num_base_bdevs": 4, 00:30:44.488 "num_base_bdevs_discovered": 4, 00:30:44.488 "num_base_bdevs_operational": 4, 00:30:44.488 "base_bdevs_list": [ 00:30:44.488 
{ 00:30:44.489 "name": "BaseBdev1", 00:30:44.489 "uuid": "78bcbc05-7501-5511-b966-5345b01c4e6b", 00:30:44.489 "is_configured": true, 00:30:44.489 "data_offset": 0, 00:30:44.489 "data_size": 65536 00:30:44.489 }, 00:30:44.489 { 00:30:44.489 "name": "BaseBdev2", 00:30:44.489 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:44.489 "is_configured": true, 00:30:44.489 "data_offset": 0, 00:30:44.489 "data_size": 65536 00:30:44.489 }, 00:30:44.489 { 00:30:44.489 "name": "BaseBdev3", 00:30:44.489 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:44.489 "is_configured": true, 00:30:44.489 "data_offset": 0, 00:30:44.489 "data_size": 65536 00:30:44.489 }, 00:30:44.489 { 00:30:44.489 "name": "BaseBdev4", 00:30:44.489 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:44.489 "is_configured": true, 00:30:44.489 "data_offset": 0, 00:30:44.489 "data_size": 65536 00:30:44.489 } 00:30:44.489 ] 00:30:44.489 }' 00:30:44.489 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.489 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:44.747 [2024-10-28 13:41:58.853818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.747 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.005 [2024-10-28 13:41:58.957416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.005 13:41:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.005 13:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.005 "name": "raid_bdev1", 00:30:45.005 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:45.005 "strip_size_kb": 0, 00:30:45.005 "state": "online", 00:30:45.005 "raid_level": "raid1", 00:30:45.005 "superblock": false, 00:30:45.005 "num_base_bdevs": 4, 00:30:45.005 "num_base_bdevs_discovered": 3, 00:30:45.005 "num_base_bdevs_operational": 3, 00:30:45.005 "base_bdevs_list": [ 00:30:45.005 { 00:30:45.005 "name": null, 00:30:45.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.005 "is_configured": false, 00:30:45.005 "data_offset": 0, 00:30:45.005 "data_size": 65536 00:30:45.005 }, 00:30:45.005 { 00:30:45.005 "name": "BaseBdev2", 00:30:45.005 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:45.005 "is_configured": true, 00:30:45.005 "data_offset": 0, 00:30:45.005 "data_size": 65536 00:30:45.005 }, 00:30:45.005 { 00:30:45.005 "name": "BaseBdev3", 00:30:45.005 "uuid": 
"ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:45.005 "is_configured": true, 00:30:45.005 "data_offset": 0, 00:30:45.005 "data_size": 65536 00:30:45.005 }, 00:30:45.005 { 00:30:45.005 "name": "BaseBdev4", 00:30:45.005 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:45.005 "is_configured": true, 00:30:45.005 "data_offset": 0, 00:30:45.005 "data_size": 65536 00:30:45.005 } 00:30:45.005 ] 00:30:45.005 }' 00:30:45.005 13:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.005 13:41:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.005 [2024-10-28 13:41:59.068283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:30:45.005 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:45.005 Zero copy mechanism will not be used. 00:30:45.005 Running I/O for 60 seconds... 00:30:45.593 13:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:45.593 13:41:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.593 13:41:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:45.593 [2024-10-28 13:41:59.477902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:45.593 13:41:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.593 13:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:45.593 [2024-10-28 13:41:59.552217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:45.593 [2024-10-28 13:41:59.556068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:45.593 [2024-10-28 13:41:59.701660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:45.856 
[2024-10-28 13:41:59.950699] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:45.856 [2024-10-28 13:41:59.952470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:46.372 135.00 IOPS, 405.00 MiB/s [2024-10-28T13:42:00.532Z] [2024-10-28 13:42:00.328555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:46.372 [2024-10-28 13:42:00.330436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:46.372 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.372 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:46.372 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:46.372 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:46.372 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.631 [2024-10-28 13:42:00.554938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:46.631 13:42:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:46.631 "name": "raid_bdev1", 00:30:46.631 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:46.631 "strip_size_kb": 0, 00:30:46.631 "state": "online", 00:30:46.631 "raid_level": "raid1", 00:30:46.631 "superblock": false, 00:30:46.631 "num_base_bdevs": 4, 00:30:46.631 "num_base_bdevs_discovered": 4, 00:30:46.631 "num_base_bdevs_operational": 4, 00:30:46.631 "process": { 00:30:46.631 "type": "rebuild", 00:30:46.631 "target": "spare", 00:30:46.631 "progress": { 00:30:46.631 "blocks": 8192, 00:30:46.631 "percent": 12 00:30:46.631 } 00:30:46.631 }, 00:30:46.631 "base_bdevs_list": [ 00:30:46.631 { 00:30:46.631 "name": "spare", 00:30:46.631 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:46.631 "is_configured": true, 00:30:46.631 "data_offset": 0, 00:30:46.631 "data_size": 65536 00:30:46.631 }, 00:30:46.631 { 00:30:46.631 "name": "BaseBdev2", 00:30:46.631 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:46.631 "is_configured": true, 00:30:46.631 "data_offset": 0, 00:30:46.631 "data_size": 65536 00:30:46.631 }, 00:30:46.631 { 00:30:46.631 "name": "BaseBdev3", 00:30:46.631 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:46.631 "is_configured": true, 00:30:46.631 "data_offset": 0, 00:30:46.631 "data_size": 65536 00:30:46.631 }, 00:30:46.631 { 00:30:46.631 "name": "BaseBdev4", 00:30:46.631 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:46.631 "is_configured": true, 00:30:46.631 "data_offset": 0, 00:30:46.631 "data_size": 65536 00:30:46.631 } 00:30:46.631 ] 00:30:46.631 }' 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.631 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.631 [2024-10-28 13:42:00.669015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:46.631 [2024-10-28 13:42:00.678335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:46.889 [2024-10-28 13:42:00.798633] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:46.889 [2024-10-28 13:42:00.821578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:46.889 [2024-10-28 13:42:00.821701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:46.889 [2024-10-28 13:42:00.821725] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:46.889 [2024-10-28 13:42:00.857116] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:46.889 
13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.889 "name": "raid_bdev1", 00:30:46.889 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:46.889 "strip_size_kb": 0, 00:30:46.889 "state": "online", 00:30:46.889 "raid_level": "raid1", 00:30:46.889 "superblock": false, 00:30:46.889 "num_base_bdevs": 4, 00:30:46.889 "num_base_bdevs_discovered": 3, 00:30:46.889 "num_base_bdevs_operational": 3, 00:30:46.889 "base_bdevs_list": [ 00:30:46.889 { 00:30:46.889 "name": null, 00:30:46.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.889 "is_configured": false, 00:30:46.889 "data_offset": 0, 00:30:46.889 "data_size": 65536 00:30:46.889 }, 00:30:46.889 { 00:30:46.889 "name": "BaseBdev2", 00:30:46.889 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:46.889 "is_configured": true, 00:30:46.889 "data_offset": 0, 00:30:46.889 "data_size": 65536 
00:30:46.889 }, 00:30:46.889 { 00:30:46.889 "name": "BaseBdev3", 00:30:46.889 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:46.889 "is_configured": true, 00:30:46.889 "data_offset": 0, 00:30:46.889 "data_size": 65536 00:30:46.889 }, 00:30:46.889 { 00:30:46.889 "name": "BaseBdev4", 00:30:46.889 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:46.889 "is_configured": true, 00:30:46.889 "data_offset": 0, 00:30:46.889 "data_size": 65536 00:30:46.889 } 00:30:46.889 ] 00:30:46.889 }' 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.889 13:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.405 113.50 IOPS, 340.50 MiB/s [2024-10-28T13:42:01.565Z] 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:47.405 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:47.406 "name": "raid_bdev1", 00:30:47.406 
"uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:47.406 "strip_size_kb": 0, 00:30:47.406 "state": "online", 00:30:47.406 "raid_level": "raid1", 00:30:47.406 "superblock": false, 00:30:47.406 "num_base_bdevs": 4, 00:30:47.406 "num_base_bdevs_discovered": 3, 00:30:47.406 "num_base_bdevs_operational": 3, 00:30:47.406 "base_bdevs_list": [ 00:30:47.406 { 00:30:47.406 "name": null, 00:30:47.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.406 "is_configured": false, 00:30:47.406 "data_offset": 0, 00:30:47.406 "data_size": 65536 00:30:47.406 }, 00:30:47.406 { 00:30:47.406 "name": "BaseBdev2", 00:30:47.406 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:47.406 "is_configured": true, 00:30:47.406 "data_offset": 0, 00:30:47.406 "data_size": 65536 00:30:47.406 }, 00:30:47.406 { 00:30:47.406 "name": "BaseBdev3", 00:30:47.406 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:47.406 "is_configured": true, 00:30:47.406 "data_offset": 0, 00:30:47.406 "data_size": 65536 00:30:47.406 }, 00:30:47.406 { 00:30:47.406 "name": "BaseBdev4", 00:30:47.406 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:47.406 "is_configured": true, 00:30:47.406 "data_offset": 0, 00:30:47.406 "data_size": 65536 00:30:47.406 } 00:30:47.406 ] 00:30:47.406 }' 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.406 13:42:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:47.406 [2024-10-28 13:42:01.485003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.406 13:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:47.406 [2024-10-28 13:42:01.545954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:30:47.406 [2024-10-28 13:42:01.548692] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:47.664 [2024-10-28 13:42:01.661475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:47.664 [2024-10-28 13:42:01.662120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:47.923 [2024-10-28 13:42:01.865601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:47.923 [2024-10-28 13:42:01.865972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:48.181 114.33 IOPS, 343.00 MiB/s [2024-10-28T13:42:02.341Z] [2024-10-28 13:42:02.236081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:48.439 13:42:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:48.439 "name": "raid_bdev1", 00:30:48.439 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:48.439 "strip_size_kb": 0, 00:30:48.439 "state": "online", 00:30:48.439 "raid_level": "raid1", 00:30:48.439 "superblock": false, 00:30:48.439 "num_base_bdevs": 4, 00:30:48.439 "num_base_bdevs_discovered": 4, 00:30:48.439 "num_base_bdevs_operational": 4, 00:30:48.439 "process": { 00:30:48.439 "type": "rebuild", 00:30:48.439 "target": "spare", 00:30:48.439 "progress": { 00:30:48.439 "blocks": 12288, 00:30:48.439 "percent": 18 00:30:48.439 } 00:30:48.439 }, 00:30:48.439 "base_bdevs_list": [ 00:30:48.439 { 00:30:48.439 "name": "spare", 00:30:48.439 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:48.439 "is_configured": true, 00:30:48.439 "data_offset": 0, 00:30:48.439 "data_size": 65536 00:30:48.439 }, 00:30:48.439 { 00:30:48.439 "name": "BaseBdev2", 00:30:48.439 "uuid": "96a7622f-96b9-5de7-a736-e1dbff81fc26", 00:30:48.439 "is_configured": true, 00:30:48.439 "data_offset": 0, 00:30:48.439 "data_size": 65536 00:30:48.439 }, 00:30:48.439 { 00:30:48.439 "name": "BaseBdev3", 00:30:48.439 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:48.439 "is_configured": true, 00:30:48.439 "data_offset": 0, 00:30:48.439 "data_size": 65536 00:30:48.439 }, 
00:30:48.439 { 00:30:48.439 "name": "BaseBdev4", 00:30:48.439 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:48.439 "is_configured": true, 00:30:48.439 "data_offset": 0, 00:30:48.439 "data_size": 65536 00:30:48.439 } 00:30:48.439 ] 00:30:48.439 }' 00:30:48.439 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:48.698 [2024-10-28 13:42:02.606675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.698 [2024-10-28 13:42:02.676758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:48.698 [2024-10-28 13:42:02.721095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:48.698 [2024-10-28 13:42:02.722009] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:48.698 [2024-10-28 13:42:02.825120] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:30:48.698 [2024-10-28 13:42:02.825434] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.698 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:48.957 "name": "raid_bdev1", 00:30:48.957 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:48.957 "strip_size_kb": 
0, 00:30:48.957 "state": "online", 00:30:48.957 "raid_level": "raid1", 00:30:48.957 "superblock": false, 00:30:48.957 "num_base_bdevs": 4, 00:30:48.957 "num_base_bdevs_discovered": 3, 00:30:48.957 "num_base_bdevs_operational": 3, 00:30:48.957 "process": { 00:30:48.957 "type": "rebuild", 00:30:48.957 "target": "spare", 00:30:48.957 "progress": { 00:30:48.957 "blocks": 16384, 00:30:48.957 "percent": 25 00:30:48.957 } 00:30:48.957 }, 00:30:48.957 "base_bdevs_list": [ 00:30:48.957 { 00:30:48.957 "name": "spare", 00:30:48.957 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:48.957 "is_configured": true, 00:30:48.957 "data_offset": 0, 00:30:48.957 "data_size": 65536 00:30:48.957 }, 00:30:48.957 { 00:30:48.957 "name": null, 00:30:48.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.957 "is_configured": false, 00:30:48.957 "data_offset": 0, 00:30:48.957 "data_size": 65536 00:30:48.957 }, 00:30:48.957 { 00:30:48.957 "name": "BaseBdev3", 00:30:48.957 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:48.957 "is_configured": true, 00:30:48.957 "data_offset": 0, 00:30:48.957 "data_size": 65536 00:30:48.957 }, 00:30:48.957 { 00:30:48.957 "name": "BaseBdev4", 00:30:48.957 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:48.957 "is_configured": true, 00:30:48.957 "data_offset": 0, 00:30:48.957 "data_size": 65536 00:30:48.957 } 00:30:48.957 ] 00:30:48.957 }' 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:48.957 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=456 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.958 13:42:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.958 13:42:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.958 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:48.958 "name": "raid_bdev1", 00:30:48.958 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:48.958 "strip_size_kb": 0, 00:30:48.958 "state": "online", 00:30:48.958 "raid_level": "raid1", 00:30:48.958 "superblock": false, 00:30:48.958 "num_base_bdevs": 4, 00:30:48.958 "num_base_bdevs_discovered": 3, 00:30:48.958 "num_base_bdevs_operational": 3, 00:30:48.958 "process": { 00:30:48.958 "type": "rebuild", 00:30:48.958 "target": "spare", 00:30:48.958 "progress": { 00:30:48.958 "blocks": 18432, 00:30:48.958 "percent": 28 00:30:48.958 } 00:30:48.958 }, 00:30:48.958 "base_bdevs_list": [ 00:30:48.958 { 00:30:48.958 "name": "spare", 00:30:48.958 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:48.958 "is_configured": 
true, 00:30:48.958 "data_offset": 0, 00:30:48.958 "data_size": 65536 00:30:48.958 }, 00:30:48.958 { 00:30:48.958 "name": null, 00:30:48.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.958 "is_configured": false, 00:30:48.958 "data_offset": 0, 00:30:48.958 "data_size": 65536 00:30:48.958 }, 00:30:48.958 { 00:30:48.958 "name": "BaseBdev3", 00:30:48.958 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:48.958 "is_configured": true, 00:30:48.958 "data_offset": 0, 00:30:48.958 "data_size": 65536 00:30:48.958 }, 00:30:48.958 { 00:30:48.958 "name": "BaseBdev4", 00:30:48.958 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:48.958 "is_configured": true, 00:30:48.958 "data_offset": 0, 00:30:48.958 "data_size": 65536 00:30:48.958 } 00:30:48.958 ] 00:30:48.958 }' 00:30:48.958 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:48.958 [2024-10-28 13:42:03.087036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:48.958 99.50 IOPS, 298.50 MiB/s [2024-10-28T13:42:03.118Z] 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:48.958 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:49.216 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:49.216 13:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:49.216 [2024-10-28 13:42:03.310129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:49.216 [2024-10-28 13:42:03.310517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:49.783 [2024-10-28 13:42:03.636173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
26624 offset_begin: 24576 offset_end: 30720 00:30:50.042 93.40 IOPS, 280.20 MiB/s [2024-10-28T13:42:04.202Z] [2024-10-28 13:42:04.136446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 13:42:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:50.301 "name": "raid_bdev1", 00:30:50.301 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:50.301 "strip_size_kb": 0, 00:30:50.301 "state": "online", 00:30:50.301 "raid_level": "raid1", 00:30:50.301 "superblock": false, 00:30:50.301 "num_base_bdevs": 4, 00:30:50.301 "num_base_bdevs_discovered": 3, 00:30:50.301 "num_base_bdevs_operational": 3, 00:30:50.301 "process": { 00:30:50.301 "type": "rebuild", 
00:30:50.301 "target": "spare", 00:30:50.301 "progress": { 00:30:50.301 "blocks": 34816, 00:30:50.301 "percent": 53 00:30:50.301 } 00:30:50.301 }, 00:30:50.301 "base_bdevs_list": [ 00:30:50.301 { 00:30:50.301 "name": "spare", 00:30:50.301 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:50.301 "is_configured": true, 00:30:50.301 "data_offset": 0, 00:30:50.301 "data_size": 65536 00:30:50.301 }, 00:30:50.301 { 00:30:50.301 "name": null, 00:30:50.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.301 "is_configured": false, 00:30:50.301 "data_offset": 0, 00:30:50.301 "data_size": 65536 00:30:50.301 }, 00:30:50.301 { 00:30:50.301 "name": "BaseBdev3", 00:30:50.301 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:50.301 "is_configured": true, 00:30:50.301 "data_offset": 0, 00:30:50.301 "data_size": 65536 00:30:50.301 }, 00:30:50.301 { 00:30:50.301 "name": "BaseBdev4", 00:30:50.301 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:50.301 "is_configured": true, 00:30:50.301 "data_offset": 0, 00:30:50.301 "data_size": 65536 00:30:50.301 } 00:30:50.301 ] 00:30:50.301 }' 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:50.301 13:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:50.869 [2024-10-28 13:42:04.830018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:51.386 85.33 IOPS, 256.00 MiB/s [2024-10-28T13:42:05.546Z] 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:51.386 13:42:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:51.386 "name": "raid_bdev1", 00:30:51.386 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:51.386 "strip_size_kb": 0, 00:30:51.386 "state": "online", 00:30:51.386 "raid_level": "raid1", 00:30:51.386 "superblock": false, 00:30:51.386 "num_base_bdevs": 4, 00:30:51.386 "num_base_bdevs_discovered": 3, 00:30:51.386 "num_base_bdevs_operational": 3, 00:30:51.386 "process": { 00:30:51.386 "type": "rebuild", 00:30:51.386 "target": "spare", 00:30:51.386 "progress": { 00:30:51.386 "blocks": 53248, 00:30:51.386 "percent": 81 00:30:51.386 } 00:30:51.386 }, 00:30:51.386 "base_bdevs_list": [ 00:30:51.386 { 00:30:51.386 "name": "spare", 00:30:51.386 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:51.386 "is_configured": true, 00:30:51.386 "data_offset": 0, 00:30:51.386 "data_size": 65536 
00:30:51.386 }, 00:30:51.386 { 00:30:51.386 "name": null, 00:30:51.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.386 "is_configured": false, 00:30:51.386 "data_offset": 0, 00:30:51.386 "data_size": 65536 00:30:51.386 }, 00:30:51.386 { 00:30:51.386 "name": "BaseBdev3", 00:30:51.386 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:51.386 "is_configured": true, 00:30:51.386 "data_offset": 0, 00:30:51.386 "data_size": 65536 00:30:51.386 }, 00:30:51.386 { 00:30:51.386 "name": "BaseBdev4", 00:30:51.386 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:51.386 "is_configured": true, 00:30:51.386 "data_offset": 0, 00:30:51.386 "data_size": 65536 00:30:51.386 } 00:30:51.386 ] 00:30:51.386 }' 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:51.386 [2024-10-28 13:42:05.496684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:51.386 [2024-10-28 13:42:05.497627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:51.386 13:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:51.644 [2024-10-28 13:42:05.729488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:52.211 78.86 IOPS, 236.57 MiB/s [2024-10-28T13:42:06.371Z] [2024-10-28 13:42:06.188916] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:52.211 [2024-10-28 13:42:06.296930] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:52.211 [2024-10-28 13:42:06.299764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:52.469 "name": "raid_bdev1", 00:30:52.469 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:52.469 "strip_size_kb": 0, 00:30:52.469 "state": "online", 00:30:52.469 "raid_level": "raid1", 00:30:52.469 "superblock": false, 00:30:52.469 "num_base_bdevs": 4, 00:30:52.469 "num_base_bdevs_discovered": 3, 00:30:52.469 "num_base_bdevs_operational": 3, 00:30:52.469 "base_bdevs_list": [ 00:30:52.469 { 00:30:52.469 "name": "spare", 00:30:52.469 "uuid": 
"512f945c-2953-53df-8ee6-b7073a137194", 00:30:52.469 "is_configured": true, 00:30:52.469 "data_offset": 0, 00:30:52.469 "data_size": 65536 00:30:52.469 }, 00:30:52.469 { 00:30:52.469 "name": null, 00:30:52.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.469 "is_configured": false, 00:30:52.469 "data_offset": 0, 00:30:52.469 "data_size": 65536 00:30:52.469 }, 00:30:52.469 { 00:30:52.469 "name": "BaseBdev3", 00:30:52.469 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:52.469 "is_configured": true, 00:30:52.469 "data_offset": 0, 00:30:52.469 "data_size": 65536 00:30:52.469 }, 00:30:52.469 { 00:30:52.469 "name": "BaseBdev4", 00:30:52.469 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:52.469 "is_configured": true, 00:30:52.469 "data_offset": 0, 00:30:52.469 "data_size": 65536 00:30:52.469 } 00:30:52.469 ] 00:30:52.469 }' 00:30:52.469 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:52.728 13:42:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:52.728 "name": "raid_bdev1", 00:30:52.728 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:52.728 "strip_size_kb": 0, 00:30:52.728 "state": "online", 00:30:52.728 "raid_level": "raid1", 00:30:52.728 "superblock": false, 00:30:52.728 "num_base_bdevs": 4, 00:30:52.728 "num_base_bdevs_discovered": 3, 00:30:52.728 "num_base_bdevs_operational": 3, 00:30:52.728 "base_bdevs_list": [ 00:30:52.728 { 00:30:52.728 "name": "spare", 00:30:52.728 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:52.728 "is_configured": true, 00:30:52.728 "data_offset": 0, 00:30:52.728 "data_size": 65536 00:30:52.728 }, 00:30:52.728 { 00:30:52.728 "name": null, 00:30:52.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.728 "is_configured": false, 00:30:52.728 "data_offset": 0, 00:30:52.728 "data_size": 65536 00:30:52.728 }, 00:30:52.728 { 00:30:52.728 "name": "BaseBdev3", 00:30:52.728 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:52.728 "is_configured": true, 00:30:52.728 "data_offset": 0, 00:30:52.728 "data_size": 65536 00:30:52.728 }, 00:30:52.728 { 00:30:52.728 "name": "BaseBdev4", 00:30:52.728 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:52.728 "is_configured": true, 00:30:52.728 "data_offset": 0, 00:30:52.728 "data_size": 65536 00:30:52.728 } 00:30:52.728 ] 00:30:52.728 }' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.728 13:42:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.986 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.986 "name": "raid_bdev1", 00:30:52.986 "uuid": "9ec369cb-89ed-4846-8b0a-ac46f5bf8845", 00:30:52.986 "strip_size_kb": 0, 00:30:52.986 "state": "online", 00:30:52.986 "raid_level": "raid1", 00:30:52.986 "superblock": false, 00:30:52.986 "num_base_bdevs": 4, 00:30:52.986 "num_base_bdevs_discovered": 3, 00:30:52.986 "num_base_bdevs_operational": 3, 00:30:52.986 "base_bdevs_list": [ 00:30:52.986 { 00:30:52.986 "name": "spare", 00:30:52.986 "uuid": "512f945c-2953-53df-8ee6-b7073a137194", 00:30:52.986 "is_configured": true, 00:30:52.986 "data_offset": 0, 00:30:52.986 "data_size": 65536 00:30:52.986 }, 00:30:52.986 { 00:30:52.986 "name": null, 00:30:52.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.986 "is_configured": false, 00:30:52.986 "data_offset": 0, 00:30:52.986 "data_size": 65536 00:30:52.986 }, 00:30:52.986 { 00:30:52.986 "name": "BaseBdev3", 00:30:52.986 "uuid": "ec015e87-eda3-5ffd-a98b-27454a83b9b9", 00:30:52.986 "is_configured": true, 00:30:52.986 "data_offset": 0, 00:30:52.986 "data_size": 65536 00:30:52.986 }, 00:30:52.986 { 00:30:52.986 "name": "BaseBdev4", 00:30:52.986 "uuid": "31d41294-3bfb-5a52-bd40-af2b154abc1a", 00:30:52.986 "is_configured": true, 00:30:52.986 "data_offset": 0, 00:30:52.986 "data_size": 65536 00:30:52.986 } 00:30:52.986 ] 00:30:52.986 }' 00:30:52.987 13:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.987 13:42:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.245 72.38 IOPS, 217.12 MiB/s [2024-10-28T13:42:07.405Z] 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:53.245 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.245 13:42:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:30:53.245 [2024-10-28 13:42:07.359798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:53.245 [2024-10-28 13:42:07.359841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:53.504 00:30:53.504 Latency(us) 00:30:53.504 [2024-10-28T13:42:07.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.504 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:53.504 raid_bdev1 : 8.33 70.33 210.98 0.00 0.00 18765.57 310.92 120109.61 00:30:53.504 [2024-10-28T13:42:07.664Z] =================================================================================================================== 00:30:53.504 [2024-10-28T13:42:07.664Z] Total : 70.33 210.98 0.00 0.00 18765.57 310.92 120109.61 00:30:53.504 [2024-10-28 13:42:07.409053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.504 [2024-10-28 13:42:07.409130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:53.504 [2024-10-28 13:42:07.409307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:53.504 [2024-10-28 13:42:07.409333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:30:53.504 { 00:30:53.504 "results": [ 00:30:53.504 { 00:30:53.504 "job": "raid_bdev1", 00:30:53.504 "core_mask": "0x1", 00:30:53.504 "workload": "randrw", 00:30:53.504 "percentage": 50, 00:30:53.504 "status": "finished", 00:30:53.504 "queue_depth": 2, 00:30:53.504 "io_size": 3145728, 00:30:53.504 "runtime": 8.332427, 00:30:53.504 "iops": 70.3276488350873, 00:30:53.504 "mibps": 210.9829465052619, 00:30:53.504 "io_failed": 0, 00:30:53.504 "io_timeout": 0, 00:30:53.504 "avg_latency_us": 18765.570462302203, 00:30:53.504 "min_latency_us": 310.9236363636364, 00:30:53.504 
"max_latency_us": 120109.61454545455 00:30:53.504 } 00:30:53.504 ], 00:30:53.504 "core_count": 1 00:30:53.504 } 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:53.504 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:53.504 13:42:07 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:30:53.762 /dev/nbd0 00:30:53.762 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:53.763 1+0 records in 00:30:53.763 1+0 records out 00:30:53.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321851 s, 12.7 MB/s 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:53.763 13:42:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:30:54.107 /dev/nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:54.107 1+0 records in 00:30:54.107 1+0 records out 00:30:54.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647029 s, 6.3 MB/s 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:54.107 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 
-- # for bdev in "${base_bdevs[@]:1}" 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.673 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:54.931 /dev/nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:54.931 1+0 records in 00:30:54.931 1+0 records out 00:30:54.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503156 s, 8.1 MB/s 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:54.931 13:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:55.190 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 91523 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 91523 ']' 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 91523 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:55.448 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91523 00:30:55.448 killing process with pid 91523 00:30:55.448 Received shutdown signal, test time was about 10.499462 seconds 00:30:55.448 00:30:55.448 Latency(us) 00:30:55.448 [2024-10-28T13:42:09.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.449 [2024-10-28T13:42:09.609Z] 
=================================================================================================================== 00:30:55.449 [2024-10-28T13:42:09.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:55.449 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:55.449 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:55.449 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91523' 00:30:55.449 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 91523 00:30:55.449 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 91523 00:30:55.449 [2024-10-28 13:42:09.570697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:55.706 [2024-10-28 13:42:09.623388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:55.963 ************************************ 00:30:55.963 END TEST raid_rebuild_test_io 00:30:55.963 ************************************ 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:30:55.963 00:30:55.963 real 0m12.852s 00:30:55.963 user 0m17.343s 00:30:55.963 sys 0m1.729s 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:55.963 13:42:09 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:55.963 13:42:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:55.963 13:42:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:55.963 13:42:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:55.963 ************************************ 00:30:55.963 START TEST raid_rebuild_test_sb_io 
00:30:55.963 ************************************ 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:30:55.963 13:42:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:30:55.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91932 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91932 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # 
'[' -z 91932 ']' 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.963 13:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:55.963 [2024-10-28 13:42:10.058579] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:30:55.963 [2024-10-28 13:42:10.059057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91932 ] 00:30:55.963 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:55.963 Zero copy mechanism will not be used. 00:30:56.221 [2024-10-28 13:42:10.214276] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:56.221 [2024-10-28 13:42:10.239911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.221 [2024-10-28 13:42:10.291378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.221 [2024-10-28 13:42:10.351575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:56.221 [2024-10-28 13:42:10.351874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 BaseBdev1_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.084021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:57.157 [2024-10-28 13:42:11.084106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.157 [2024-10-28 13:42:11.084178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:30:57.157 [2024-10-28 13:42:11.084214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.157 [2024-10-28 13:42:11.087360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.157 [2024-10-28 13:42:11.087412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:57.157 BaseBdev1 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 BaseBdev2_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.108319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:57.157 [2024-10-28 13:42:11.108389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.157 [2024-10-28 13:42:11.108438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:57.157 [2024-10-28 13:42:11.108464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.157 [2024-10-28 13:42:11.111503] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.157 [2024-10-28 13:42:11.111559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:57.157 BaseBdev2 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 BaseBdev3_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.132817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:57.157 [2024-10-28 13:42:11.132915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.157 [2024-10-28 13:42:11.132967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:57.157 [2024-10-28 13:42:11.132995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.157 [2024-10-28 13:42:11.135985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.157 [2024-10-28 13:42:11.136038] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:30:57.157 BaseBdev3 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 BaseBdev4_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.168408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:57.157 [2024-10-28 13:42:11.168630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.157 [2024-10-28 13:42:11.168845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:57.157 [2024-10-28 13:42:11.169024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.157 [2024-10-28 13:42:11.172235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.157 [2024-10-28 13:42:11.172411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:57.157 BaseBdev4 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 spare_malloc 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 spare_delay 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.200994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:57.157 [2024-10-28 13:42:11.201079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.157 [2024-10-28 13:42:11.201117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:57.157 [2024-10-28 13:42:11.201156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.157 [2024-10-28 13:42:11.204281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.157 [2024-10-28 13:42:11.204334] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:57.157 spare 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 [2024-10-28 13:42:11.209097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:57.157 [2024-10-28 13:42:11.211680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:57.157 [2024-10-28 13:42:11.211955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:57.157 [2024-10-28 13:42:11.212036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:57.157 [2024-10-28 13:42:11.212314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:30:57.157 [2024-10-28 13:42:11.212340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:57.157 [2024-10-28 13:42:11.212708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:57.157 [2024-10-28 13:42:11.212910] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:30:57.157 [2024-10-28 13:42:11.212925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:30:57.157 [2024-10-28 13:42:11.213187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.157 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.158 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.158 "name": "raid_bdev1", 00:30:57.158 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:30:57.158 "strip_size_kb": 0, 00:30:57.158 "state": "online", 00:30:57.158 "raid_level": "raid1", 
00:30:57.158 "superblock": true, 00:30:57.158 "num_base_bdevs": 4, 00:30:57.158 "num_base_bdevs_discovered": 4, 00:30:57.158 "num_base_bdevs_operational": 4, 00:30:57.158 "base_bdevs_list": [ 00:30:57.158 { 00:30:57.158 "name": "BaseBdev1", 00:30:57.158 "uuid": "b8adef39-cf8f-5889-bacc-714b21f01d71", 00:30:57.158 "is_configured": true, 00:30:57.158 "data_offset": 2048, 00:30:57.158 "data_size": 63488 00:30:57.158 }, 00:30:57.158 { 00:30:57.158 "name": "BaseBdev2", 00:30:57.158 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:30:57.158 "is_configured": true, 00:30:57.158 "data_offset": 2048, 00:30:57.158 "data_size": 63488 00:30:57.158 }, 00:30:57.158 { 00:30:57.158 "name": "BaseBdev3", 00:30:57.158 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:30:57.158 "is_configured": true, 00:30:57.158 "data_offset": 2048, 00:30:57.158 "data_size": 63488 00:30:57.158 }, 00:30:57.158 { 00:30:57.158 "name": "BaseBdev4", 00:30:57.158 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:30:57.158 "is_configured": true, 00:30:57.158 "data_offset": 2048, 00:30:57.158 "data_size": 63488 00:30:57.158 } 00:30:57.158 ] 00:30:57.158 }' 00:30:57.158 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.158 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 [2024-10-28 13:42:11.709768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 [2024-10-28 13:42:11.829344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:57.724 13:42:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.724 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.982 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.982 "name": "raid_bdev1", 00:30:57.982 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:30:57.982 "strip_size_kb": 0, 00:30:57.982 "state": "online", 00:30:57.982 "raid_level": "raid1", 00:30:57.982 "superblock": true, 00:30:57.982 "num_base_bdevs": 4, 00:30:57.982 "num_base_bdevs_discovered": 3, 00:30:57.982 "num_base_bdevs_operational": 3, 00:30:57.982 "base_bdevs_list": [ 00:30:57.982 { 00:30:57.982 "name": null, 00:30:57.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.982 "is_configured": false, 00:30:57.982 "data_offset": 0, 00:30:57.982 "data_size": 
63488 00:30:57.982 }, 00:30:57.982 { 00:30:57.982 "name": "BaseBdev2", 00:30:57.982 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:30:57.982 "is_configured": true, 00:30:57.982 "data_offset": 2048, 00:30:57.982 "data_size": 63488 00:30:57.982 }, 00:30:57.982 { 00:30:57.982 "name": "BaseBdev3", 00:30:57.982 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:30:57.982 "is_configured": true, 00:30:57.982 "data_offset": 2048, 00:30:57.982 "data_size": 63488 00:30:57.982 }, 00:30:57.982 { 00:30:57.982 "name": "BaseBdev4", 00:30:57.982 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:30:57.982 "is_configured": true, 00:30:57.982 "data_offset": 2048, 00:30:57.982 "data_size": 63488 00:30:57.982 } 00:30:57.982 ] 00:30:57.982 }' 00:30:57.982 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.982 13:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.982 [2024-10-28 13:42:11.932073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:30:57.982 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:57.982 Zero copy mechanism will not be used. 00:30:57.982 Running I/O for 60 seconds... 
00:30:58.240 13:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:58.240 13:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.240 13:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.240 [2024-10-28 13:42:12.351620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:58.497 13:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.497 13:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:58.497 [2024-10-28 13:42:12.420480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:30:58.497 [2024-10-28 13:42:12.423194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:58.497 [2024-10-28 13:42:12.535456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:58.497 [2024-10-28 13:42:12.536096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:58.755 [2024-10-28 13:42:12.768361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:58.755 [2024-10-28 13:42:12.768732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:59.012 166.00 IOPS, 498.00 MiB/s [2024-10-28T13:42:13.172Z] [2024-10-28 13:42:13.032393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:59.012 [2024-10-28 13:42:13.033952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:59.270 [2024-10-28 13:42:13.247429] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:59.270 [2024-10-28 13:42:13.248295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.270 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:59.528 "name": "raid_bdev1", 00:30:59.528 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:30:59.528 "strip_size_kb": 0, 00:30:59.528 "state": "online", 00:30:59.528 "raid_level": "raid1", 00:30:59.528 "superblock": true, 00:30:59.528 "num_base_bdevs": 4, 00:30:59.528 "num_base_bdevs_discovered": 4, 00:30:59.528 "num_base_bdevs_operational": 4, 00:30:59.528 "process": { 00:30:59.528 "type": "rebuild", 00:30:59.528 "target": "spare", 00:30:59.528 "progress": { 
00:30:59.528 "blocks": 10240, 00:30:59.528 "percent": 16 00:30:59.528 } 00:30:59.528 }, 00:30:59.528 "base_bdevs_list": [ 00:30:59.528 { 00:30:59.528 "name": "spare", 00:30:59.528 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:30:59.528 "is_configured": true, 00:30:59.528 "data_offset": 2048, 00:30:59.528 "data_size": 63488 00:30:59.528 }, 00:30:59.528 { 00:30:59.528 "name": "BaseBdev2", 00:30:59.528 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:30:59.528 "is_configured": true, 00:30:59.528 "data_offset": 2048, 00:30:59.528 "data_size": 63488 00:30:59.528 }, 00:30:59.528 { 00:30:59.528 "name": "BaseBdev3", 00:30:59.528 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:30:59.528 "is_configured": true, 00:30:59.528 "data_offset": 2048, 00:30:59.528 "data_size": 63488 00:30:59.528 }, 00:30:59.528 { 00:30:59.528 "name": "BaseBdev4", 00:30:59.528 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:30:59.528 "is_configured": true, 00:30:59.528 "data_offset": 2048, 00:30:59.528 "data_size": 63488 00:30:59.528 } 00:30:59.528 ] 00:30:59.528 }' 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.528 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.528 [2024-10-28 13:42:13.572071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.529 [2024-10-28 
13:42:13.682442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:59.529 [2024-10-28 13:42:13.682752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:59.529 [2024-10-28 13:42:13.684159] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:59.787 [2024-10-28 13:42:13.686438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:59.788 [2024-10-28 13:42:13.686617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:59.788 [2024-10-28 13:42:13.686659] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:59.788 [2024-10-28 13:42:13.704454] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.788 "name": "raid_bdev1", 00:30:59.788 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:30:59.788 "strip_size_kb": 0, 00:30:59.788 "state": "online", 00:30:59.788 "raid_level": "raid1", 00:30:59.788 "superblock": true, 00:30:59.788 "num_base_bdevs": 4, 00:30:59.788 "num_base_bdevs_discovered": 3, 00:30:59.788 "num_base_bdevs_operational": 3, 00:30:59.788 "base_bdevs_list": [ 00:30:59.788 { 00:30:59.788 "name": null, 00:30:59.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.788 "is_configured": false, 00:30:59.788 "data_offset": 0, 00:30:59.788 "data_size": 63488 00:30:59.788 }, 00:30:59.788 { 00:30:59.788 "name": "BaseBdev2", 00:30:59.788 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:30:59.788 "is_configured": true, 00:30:59.788 "data_offset": 2048, 00:30:59.788 "data_size": 63488 00:30:59.788 }, 00:30:59.788 { 00:30:59.788 "name": "BaseBdev3", 00:30:59.788 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:30:59.788 "is_configured": true, 00:30:59.788 "data_offset": 2048, 00:30:59.788 "data_size": 63488 00:30:59.788 }, 00:30:59.788 { 00:30:59.788 "name": "BaseBdev4", 
00:30:59.788 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:30:59.788 "is_configured": true, 00:30:59.788 "data_offset": 2048, 00:30:59.788 "data_size": 63488 00:30:59.788 } 00:30:59.788 ] 00:30:59.788 }' 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.788 13:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.047 134.50 IOPS, 403.50 MiB/s [2024-10-28T13:42:14.207Z] 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.047 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:00.308 "name": "raid_bdev1", 00:31:00.308 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:00.308 "strip_size_kb": 0, 00:31:00.308 "state": "online", 00:31:00.308 "raid_level": "raid1", 00:31:00.308 "superblock": true, 00:31:00.308 "num_base_bdevs": 4, 00:31:00.308 
"num_base_bdevs_discovered": 3, 00:31:00.308 "num_base_bdevs_operational": 3, 00:31:00.308 "base_bdevs_list": [ 00:31:00.308 { 00:31:00.308 "name": null, 00:31:00.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.308 "is_configured": false, 00:31:00.308 "data_offset": 0, 00:31:00.308 "data_size": 63488 00:31:00.308 }, 00:31:00.308 { 00:31:00.308 "name": "BaseBdev2", 00:31:00.308 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:31:00.308 "is_configured": true, 00:31:00.308 "data_offset": 2048, 00:31:00.308 "data_size": 63488 00:31:00.308 }, 00:31:00.308 { 00:31:00.308 "name": "BaseBdev3", 00:31:00.308 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:00.308 "is_configured": true, 00:31:00.308 "data_offset": 2048, 00:31:00.308 "data_size": 63488 00:31:00.308 }, 00:31:00.308 { 00:31:00.308 "name": "BaseBdev4", 00:31:00.308 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:00.308 "is_configured": true, 00:31:00.308 "data_offset": 2048, 00:31:00.308 "data_size": 63488 00:31:00.308 } 00:31:00.308 ] 00:31:00.308 }' 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.308 [2024-10-28 13:42:14.367282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:00.308 13:42:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.308 13:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:00.567 [2024-10-28 13:42:14.466759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:31:00.567 [2024-10-28 13:42:14.469617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:00.567 [2024-10-28 13:42:14.582207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:00.567 [2024-10-28 13:42:14.582863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:00.825 [2024-10-28 13:42:14.788686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:00.825 [2024-10-28 13:42:14.789750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:01.083 129.33 IOPS, 388.00 MiB/s [2024-10-28T13:42:15.243Z] [2024-10-28 13:42:15.115402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:01.342 [2024-10-28 13:42:15.340100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:01.342 [2024-10-28 13:42:15.341263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:01.342 
13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.342 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:01.342 "name": "raid_bdev1", 00:31:01.342 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:01.342 "strip_size_kb": 0, 00:31:01.342 "state": "online", 00:31:01.342 "raid_level": "raid1", 00:31:01.342 "superblock": true, 00:31:01.342 "num_base_bdevs": 4, 00:31:01.342 "num_base_bdevs_discovered": 4, 00:31:01.342 "num_base_bdevs_operational": 4, 00:31:01.342 "process": { 00:31:01.342 "type": "rebuild", 00:31:01.342 "target": "spare", 00:31:01.342 "progress": { 00:31:01.342 "blocks": 10240, 00:31:01.342 "percent": 16 00:31:01.342 } 00:31:01.342 }, 00:31:01.342 "base_bdevs_list": [ 00:31:01.342 { 00:31:01.342 "name": "spare", 00:31:01.342 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:01.342 "is_configured": true, 00:31:01.342 "data_offset": 2048, 00:31:01.342 "data_size": 63488 00:31:01.342 }, 00:31:01.342 { 00:31:01.342 "name": "BaseBdev2", 00:31:01.342 "uuid": "5b840756-b721-5de8-9286-2a41912de1ce", 00:31:01.343 "is_configured": true, 00:31:01.343 "data_offset": 2048, 00:31:01.343 "data_size": 63488 00:31:01.343 }, 00:31:01.343 { 00:31:01.343 "name": "BaseBdev3", 00:31:01.343 "uuid": 
"fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:01.343 "is_configured": true, 00:31:01.343 "data_offset": 2048, 00:31:01.343 "data_size": 63488 00:31:01.343 }, 00:31:01.343 { 00:31:01.343 "name": "BaseBdev4", 00:31:01.343 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:01.343 "is_configured": true, 00:31:01.343 "data_offset": 2048, 00:31:01.343 "data_size": 63488 00:31:01.343 } 00:31:01.343 ] 00:31:01.343 }' 00:31:01.343 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:31:01.601 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.601 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.601 [2024-10-28 13:42:15.616457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:01.601 
[2024-10-28 13:42:15.696827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:01.601 [2024-10-28 13:42:15.698606] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:01.859 [2024-10-28 13:42:15.909398] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:31:01.859 [2024-10-28 13:42:15.909701] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.859 13:42:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.859 107.00 IOPS, 321.00 MiB/s [2024-10-28T13:42:16.019Z] 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:01.859 "name": "raid_bdev1", 00:31:01.859 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:01.859 "strip_size_kb": 0, 00:31:01.859 "state": "online", 00:31:01.859 "raid_level": "raid1", 00:31:01.859 "superblock": true, 00:31:01.859 "num_base_bdevs": 4, 00:31:01.859 "num_base_bdevs_discovered": 3, 00:31:01.859 "num_base_bdevs_operational": 3, 00:31:01.859 "process": { 00:31:01.859 "type": "rebuild", 00:31:01.859 "target": "spare", 00:31:01.859 "progress": { 00:31:01.859 "blocks": 14336, 00:31:01.859 "percent": 22 00:31:01.859 } 00:31:01.859 }, 00:31:01.859 "base_bdevs_list": [ 00:31:01.859 { 00:31:01.859 "name": "spare", 00:31:01.859 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:01.859 "is_configured": true, 00:31:01.859 "data_offset": 2048, 00:31:01.859 "data_size": 63488 00:31:01.859 }, 00:31:01.859 { 00:31:01.859 "name": null, 00:31:01.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.859 "is_configured": false, 00:31:01.859 "data_offset": 0, 00:31:01.859 "data_size": 63488 00:31:01.859 }, 00:31:01.859 { 00:31:01.859 "name": "BaseBdev3", 00:31:01.859 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:01.859 "is_configured": true, 00:31:01.859 "data_offset": 2048, 00:31:01.859 "data_size": 63488 00:31:01.859 }, 00:31:01.859 { 00:31:01.859 "name": "BaseBdev4", 00:31:01.859 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:01.859 "is_configured": true, 00:31:01.859 "data_offset": 2048, 00:31:01.859 "data_size": 63488 00:31:01.859 } 00:31:01.859 ] 00:31:01.859 }' 00:31:01.859 13:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=470 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:02.119 "name": "raid_bdev1", 00:31:02.119 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:02.119 "strip_size_kb": 0, 00:31:02.119 "state": "online", 00:31:02.119 "raid_level": "raid1", 00:31:02.119 "superblock": true, 00:31:02.119 "num_base_bdevs": 4, 00:31:02.119 
"num_base_bdevs_discovered": 3, 00:31:02.119 "num_base_bdevs_operational": 3, 00:31:02.119 "process": { 00:31:02.119 "type": "rebuild", 00:31:02.119 "target": "spare", 00:31:02.119 "progress": { 00:31:02.119 "blocks": 16384, 00:31:02.119 "percent": 25 00:31:02.119 } 00:31:02.119 }, 00:31:02.119 "base_bdevs_list": [ 00:31:02.119 { 00:31:02.119 "name": "spare", 00:31:02.119 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:02.119 "is_configured": true, 00:31:02.119 "data_offset": 2048, 00:31:02.119 "data_size": 63488 00:31:02.119 }, 00:31:02.119 { 00:31:02.119 "name": null, 00:31:02.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.119 "is_configured": false, 00:31:02.119 "data_offset": 0, 00:31:02.119 "data_size": 63488 00:31:02.119 }, 00:31:02.119 { 00:31:02.119 "name": "BaseBdev3", 00:31:02.119 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:02.119 "is_configured": true, 00:31:02.119 "data_offset": 2048, 00:31:02.119 "data_size": 63488 00:31:02.119 }, 00:31:02.119 { 00:31:02.119 "name": "BaseBdev4", 00:31:02.119 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:02.119 "is_configured": true, 00:31:02.119 "data_offset": 2048, 00:31:02.119 "data_size": 63488 00:31:02.119 } 00:31:02.119 ] 00:31:02.119 }' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:02.119 13:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:02.378 [2024-10-28 13:42:16.285072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:02.378 [2024-10-28 
13:42:16.513031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:02.378 [2024-10-28 13:42:16.513907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:02.945 [2024-10-28 13:42:16.881323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:02.945 [2024-10-28 13:42:16.892311] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:31:03.204 95.20 IOPS, 285.60 MiB/s [2024-10-28T13:42:17.364Z] [2024-10-28 13:42:17.115740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:03.204 "name": "raid_bdev1", 00:31:03.204 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:03.204 "strip_size_kb": 0, 00:31:03.204 "state": "online", 00:31:03.204 "raid_level": "raid1", 00:31:03.204 "superblock": true, 00:31:03.204 "num_base_bdevs": 4, 00:31:03.204 "num_base_bdevs_discovered": 3, 00:31:03.204 "num_base_bdevs_operational": 3, 00:31:03.204 "process": { 00:31:03.204 "type": "rebuild", 00:31:03.204 "target": "spare", 00:31:03.204 "progress": { 00:31:03.204 "blocks": 28672, 00:31:03.204 "percent": 45 00:31:03.204 } 00:31:03.204 }, 00:31:03.204 "base_bdevs_list": [ 00:31:03.204 { 00:31:03.204 "name": "spare", 00:31:03.204 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:03.204 "is_configured": true, 00:31:03.204 "data_offset": 2048, 00:31:03.204 "data_size": 63488 00:31:03.204 }, 00:31:03.204 { 00:31:03.204 "name": null, 00:31:03.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.204 "is_configured": false, 00:31:03.204 "data_offset": 0, 00:31:03.204 "data_size": 63488 00:31:03.204 }, 00:31:03.204 { 00:31:03.204 "name": "BaseBdev3", 00:31:03.204 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:03.204 "is_configured": true, 00:31:03.204 "data_offset": 2048, 00:31:03.204 "data_size": 63488 00:31:03.204 }, 00:31:03.204 { 00:31:03.204 "name": "BaseBdev4", 00:31:03.204 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:03.204 "is_configured": true, 00:31:03.204 "data_offset": 2048, 00:31:03.204 "data_size": 63488 00:31:03.204 } 00:31:03.204 ] 00:31:03.204 }' 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:03.204 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:03.204 13:42:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:03.462 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:03.462 13:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:03.721 [2024-10-28 13:42:17.716803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:31:04.242 87.33 IOPS, 262.00 MiB/s [2024-10-28T13:42:18.402Z] [2024-10-28 13:42:18.168798] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.242 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.501 [2024-10-28 13:42:18.401151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 
00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:04.501 "name": "raid_bdev1", 00:31:04.501 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:04.501 "strip_size_kb": 0, 00:31:04.501 "state": "online", 00:31:04.501 "raid_level": "raid1", 00:31:04.501 "superblock": true, 00:31:04.501 "num_base_bdevs": 4, 00:31:04.501 "num_base_bdevs_discovered": 3, 00:31:04.501 "num_base_bdevs_operational": 3, 00:31:04.501 "process": { 00:31:04.501 "type": "rebuild", 00:31:04.501 "target": "spare", 00:31:04.501 "progress": { 00:31:04.501 "blocks": 49152, 00:31:04.501 "percent": 77 00:31:04.501 } 00:31:04.501 }, 00:31:04.501 "base_bdevs_list": [ 00:31:04.501 { 00:31:04.501 "name": "spare", 00:31:04.501 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 2048, 00:31:04.501 "data_size": 63488 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": null, 00:31:04.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.501 "is_configured": false, 00:31:04.501 "data_offset": 0, 00:31:04.501 "data_size": 63488 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": "BaseBdev3", 00:31:04.501 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 2048, 00:31:04.501 "data_size": 63488 00:31:04.501 }, 00:31:04.501 { 00:31:04.501 "name": "BaseBdev4", 00:31:04.501 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:04.501 "is_configured": true, 00:31:04.501 "data_offset": 2048, 00:31:04.501 "data_size": 63488 00:31:04.501 } 00:31:04.501 ] 00:31:04.501 }' 00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:04.501 13:42:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:04.501 13:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:04.760 [2024-10-28 13:42:18.750212] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:31:05.018 79.57 IOPS, 238.71 MiB/s [2024-10-28T13:42:19.178Z] [2024-10-28 13:42:18.973998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:31:05.276 [2024-10-28 13:42:19.206288] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:05.276 [2024-10-28 13:42:19.314204] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:05.276 [2024-10-28 13:42:19.317837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.534 
13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:05.534 "name": "raid_bdev1", 00:31:05.534 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:05.534 "strip_size_kb": 0, 00:31:05.534 "state": "online", 00:31:05.534 "raid_level": "raid1", 00:31:05.534 "superblock": true, 00:31:05.534 "num_base_bdevs": 4, 00:31:05.534 "num_base_bdevs_discovered": 3, 00:31:05.534 "num_base_bdevs_operational": 3, 00:31:05.534 "base_bdevs_list": [ 00:31:05.534 { 00:31:05.534 "name": "spare", 00:31:05.534 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:05.534 "is_configured": true, 00:31:05.534 "data_offset": 2048, 00:31:05.534 "data_size": 63488 00:31:05.534 }, 00:31:05.534 { 00:31:05.534 "name": null, 00:31:05.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.534 "is_configured": false, 00:31:05.534 "data_offset": 0, 00:31:05.534 "data_size": 63488 00:31:05.534 }, 00:31:05.534 { 00:31:05.534 "name": "BaseBdev3", 00:31:05.534 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:05.534 "is_configured": true, 00:31:05.534 "data_offset": 2048, 00:31:05.534 "data_size": 63488 00:31:05.534 }, 00:31:05.534 { 00:31:05.534 "name": "BaseBdev4", 00:31:05.534 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:05.534 "is_configured": true, 00:31:05.534 "data_offset": 2048, 00:31:05.534 "data_size": 63488 00:31:05.534 } 00:31:05.534 ] 00:31:05.534 }' 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:05.534 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:05.534 
13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:05.793 "name": "raid_bdev1", 00:31:05.793 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:05.793 "strip_size_kb": 0, 00:31:05.793 "state": "online", 00:31:05.793 "raid_level": "raid1", 00:31:05.793 "superblock": true, 00:31:05.793 "num_base_bdevs": 4, 00:31:05.793 "num_base_bdevs_discovered": 3, 00:31:05.793 "num_base_bdevs_operational": 3, 00:31:05.793 "base_bdevs_list": [ 00:31:05.793 { 00:31:05.793 "name": "spare", 00:31:05.793 
"uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": null, 00:31:05.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.793 "is_configured": false, 00:31:05.793 "data_offset": 0, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": "BaseBdev3", 00:31:05.793 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": "BaseBdev4", 00:31:05.793 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 } 00:31:05.793 ] 00:31:05.793 }' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:05.793 "name": "raid_bdev1", 00:31:05.793 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:05.793 "strip_size_kb": 0, 00:31:05.793 "state": "online", 00:31:05.793 "raid_level": "raid1", 00:31:05.793 "superblock": true, 00:31:05.793 "num_base_bdevs": 4, 00:31:05.793 "num_base_bdevs_discovered": 3, 00:31:05.793 "num_base_bdevs_operational": 3, 00:31:05.793 "base_bdevs_list": [ 00:31:05.793 { 00:31:05.793 "name": "spare", 00:31:05.793 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": null, 00:31:05.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.793 "is_configured": false, 00:31:05.793 "data_offset": 0, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": 
"BaseBdev3", 00:31:05.793 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 }, 00:31:05.793 { 00:31:05.793 "name": "BaseBdev4", 00:31:05.793 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:05.793 "is_configured": true, 00:31:05.793 "data_offset": 2048, 00:31:05.793 "data_size": 63488 00:31:05.793 } 00:31:05.793 ] 00:31:05.793 }' 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:05.793 13:42:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.309 73.50 IOPS, 220.50 MiB/s [2024-10-28T13:42:20.469Z] 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.309 [2024-10-28 13:42:20.383780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:06.309 [2024-10-28 13:42:20.383832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:06.309 00:31:06.309 Latency(us) 00:31:06.309 [2024-10-28T13:42:20.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.309 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:31:06.309 raid_bdev1 : 8.49 72.06 216.19 0.00 0.00 19177.77 286.72 118203.11 00:31:06.309 [2024-10-28T13:42:20.469Z] =================================================================================================================== 00:31:06.309 [2024-10-28T13:42:20.469Z] Total : 72.06 216.19 0.00 0.00 19177.77 286.72 118203.11 00:31:06.309 { 00:31:06.309 "results": [ 00:31:06.309 { 00:31:06.309 "job": "raid_bdev1", 00:31:06.309 "core_mask": "0x1", 
00:31:06.309 "workload": "randrw", 00:31:06.309 "percentage": 50, 00:31:06.309 "status": "finished", 00:31:06.309 "queue_depth": 2, 00:31:06.309 "io_size": 3145728, 00:31:06.309 "runtime": 8.492673, 00:31:06.309 "iops": 72.06211754532407, 00:31:06.309 "mibps": 216.1863526359722, 00:31:06.309 "io_failed": 0, 00:31:06.309 "io_timeout": 0, 00:31:06.309 "avg_latency_us": 19177.766179441474, 00:31:06.309 "min_latency_us": 286.72, 00:31:06.309 "max_latency_us": 118203.11272727273 00:31:06.309 } 00:31:06.309 ], 00:31:06.309 "core_count": 1 00:31:06.309 } 00:31:06.309 [2024-10-28 13:42:20.433227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:06.309 [2024-10-28 13:42:20.433296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.309 [2024-10-28 13:42:20.433460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.309 [2024-10-28 13:42:20.433485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.309 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.568 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:31:06.827 /dev/nbd0 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:06.828 
13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:06.828 1+0 records in 00:31:06.828 1+0 records out 00:31:06.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00264179 s, 1.6 MB/s 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:31:06.828 13:42:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:06.828 13:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:07.087 /dev/nbd1 00:31:07.087 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:07.087 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # break 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:07.088 1+0 records in 00:31:07.088 1+0 records out 00:31:07.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357132 s, 11.5 MB/s 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.088 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:07.346 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:31:07.346 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:07.347 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:07.347 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:07.347 13:42:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:07.347 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:07.347 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.605 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:07.865 /dev/nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:07.865 1+0 records in 00:31:07.865 1+0 records out 00:31:07.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364677 s, 11.2 MB/s 
00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:07.865 13:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:08.211 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:08.469 13:42:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.469 [2024-10-28 13:42:22.463653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:08.469 [2024-10-28 13:42:22.463881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:08.469 [2024-10-28 13:42:22.464032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:08.469 [2024-10-28 13:42:22.464174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:08.469 [2024-10-28 13:42:22.467310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:08.469 [2024-10-28 13:42:22.467500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:08.469 [2024-10-28 13:42:22.467810] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:31:08.469 [2024-10-28 13:42:22.467982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:08.469 [2024-10-28 13:42:22.468210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:08.469 [2024-10-28 13:42:22.468352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:08.469 spare 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.469 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.469 [2024-10-28 13:42:22.568490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:08.469 [2024-10-28 13:42:22.568556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:08.469 [2024-10-28 13:42:22.568990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:31:08.469 [2024-10-28 13:42:22.569256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:08.469 [2024-10-28 13:42:22.569274] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:08.469 [2024-10-28 13:42:22.569489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.470 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.729 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:08.729 "name": "raid_bdev1", 00:31:08.729 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:08.729 "strip_size_kb": 0, 00:31:08.729 "state": "online", 00:31:08.729 "raid_level": "raid1", 00:31:08.729 "superblock": true, 00:31:08.729 "num_base_bdevs": 4, 00:31:08.729 "num_base_bdevs_discovered": 3, 00:31:08.729 "num_base_bdevs_operational": 3, 00:31:08.729 "base_bdevs_list": [ 00:31:08.729 { 00:31:08.729 "name": "spare", 00:31:08.729 "uuid": 
"c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:08.729 "is_configured": true, 00:31:08.729 "data_offset": 2048, 00:31:08.729 "data_size": 63488 00:31:08.729 }, 00:31:08.729 { 00:31:08.729 "name": null, 00:31:08.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.729 "is_configured": false, 00:31:08.729 "data_offset": 2048, 00:31:08.729 "data_size": 63488 00:31:08.729 }, 00:31:08.729 { 00:31:08.729 "name": "BaseBdev3", 00:31:08.729 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:08.729 "is_configured": true, 00:31:08.729 "data_offset": 2048, 00:31:08.729 "data_size": 63488 00:31:08.729 }, 00:31:08.729 { 00:31:08.729 "name": "BaseBdev4", 00:31:08.729 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:08.729 "is_configured": true, 00:31:08.729 "data_offset": 2048, 00:31:08.729 "data_size": 63488 00:31:08.729 } 00:31:08.729 ] 00:31:08.729 }' 00:31:08.729 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:08.729 13:42:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.987 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:08.987 "name": "raid_bdev1", 00:31:08.987 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:08.987 "strip_size_kb": 0, 00:31:08.987 "state": "online", 00:31:08.987 "raid_level": "raid1", 00:31:08.987 "superblock": true, 00:31:08.987 "num_base_bdevs": 4, 00:31:08.987 "num_base_bdevs_discovered": 3, 00:31:08.987 "num_base_bdevs_operational": 3, 00:31:08.987 "base_bdevs_list": [ 00:31:08.987 { 00:31:08.987 "name": "spare", 00:31:08.987 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:08.987 "is_configured": true, 00:31:08.987 "data_offset": 2048, 00:31:08.987 "data_size": 63488 00:31:08.987 }, 00:31:08.987 { 00:31:08.987 "name": null, 00:31:08.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.987 "is_configured": false, 00:31:08.988 "data_offset": 2048, 00:31:08.988 "data_size": 63488 00:31:08.988 }, 00:31:08.988 { 00:31:08.988 "name": "BaseBdev3", 00:31:08.988 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:08.988 "is_configured": true, 00:31:08.988 "data_offset": 2048, 00:31:08.988 "data_size": 63488 00:31:08.988 }, 00:31:08.988 { 00:31:08.988 "name": "BaseBdev4", 00:31:08.988 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:08.988 "is_configured": true, 00:31:08.988 "data_offset": 2048, 00:31:08.988 "data_size": 63488 00:31:08.988 } 00:31:08.988 ] 00:31:08.988 }' 00:31:08.988 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:09.246 
13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.246 [2024-10-28 13:42:23.304283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.246 "name": "raid_bdev1", 00:31:09.246 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:09.246 "strip_size_kb": 0, 00:31:09.246 "state": "online", 00:31:09.246 "raid_level": "raid1", 00:31:09.246 "superblock": true, 00:31:09.246 "num_base_bdevs": 4, 00:31:09.246 "num_base_bdevs_discovered": 2, 00:31:09.246 "num_base_bdevs_operational": 2, 00:31:09.246 "base_bdevs_list": [ 00:31:09.246 { 00:31:09.246 "name": null, 00:31:09.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.246 "is_configured": false, 00:31:09.246 "data_offset": 0, 00:31:09.246 "data_size": 63488 00:31:09.246 }, 00:31:09.246 { 00:31:09.246 "name": null, 00:31:09.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.246 "is_configured": false, 00:31:09.246 "data_offset": 2048, 00:31:09.246 "data_size": 63488 00:31:09.246 }, 00:31:09.246 { 00:31:09.246 "name": 
"BaseBdev3", 00:31:09.246 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:09.246 "is_configured": true, 00:31:09.246 "data_offset": 2048, 00:31:09.246 "data_size": 63488 00:31:09.246 }, 00:31:09.246 { 00:31:09.246 "name": "BaseBdev4", 00:31:09.246 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:09.246 "is_configured": true, 00:31:09.246 "data_offset": 2048, 00:31:09.246 "data_size": 63488 00:31:09.246 } 00:31:09.246 ] 00:31:09.246 }' 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.246 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.814 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:09.814 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.814 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.814 [2024-10-28 13:42:23.856585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:09.814 [2024-10-28 13:42:23.856861] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:09.814 [2024-10-28 13:42:23.856886] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:09.814 [2024-10-28 13:42:23.856957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:09.814 [2024-10-28 13:42:23.863217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:31:09.814 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.814 13:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:31:09.814 [2024-10-28 13:42:23.865995] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:10.752 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.011 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:11.011 "name": "raid_bdev1", 00:31:11.011 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:11.011 "strip_size_kb": 0, 00:31:11.011 "state": "online", 
00:31:11.011 "raid_level": "raid1", 00:31:11.011 "superblock": true, 00:31:11.011 "num_base_bdevs": 4, 00:31:11.011 "num_base_bdevs_discovered": 3, 00:31:11.011 "num_base_bdevs_operational": 3, 00:31:11.011 "process": { 00:31:11.011 "type": "rebuild", 00:31:11.011 "target": "spare", 00:31:11.011 "progress": { 00:31:11.011 "blocks": 20480, 00:31:11.011 "percent": 32 00:31:11.011 } 00:31:11.011 }, 00:31:11.011 "base_bdevs_list": [ 00:31:11.011 { 00:31:11.011 "name": "spare", 00:31:11.011 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:11.011 "is_configured": true, 00:31:11.011 "data_offset": 2048, 00:31:11.011 "data_size": 63488 00:31:11.011 }, 00:31:11.011 { 00:31:11.011 "name": null, 00:31:11.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.011 "is_configured": false, 00:31:11.011 "data_offset": 2048, 00:31:11.011 "data_size": 63488 00:31:11.011 }, 00:31:11.011 { 00:31:11.011 "name": "BaseBdev3", 00:31:11.011 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:11.011 "is_configured": true, 00:31:11.011 "data_offset": 2048, 00:31:11.011 "data_size": 63488 00:31:11.011 }, 00:31:11.011 { 00:31:11.011 "name": "BaseBdev4", 00:31:11.011 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:11.011 "is_configured": true, 00:31:11.011 "data_offset": 2048, 00:31:11.011 "data_size": 63488 00:31:11.011 } 00:31:11.011 ] 00:31:11.011 }' 00:31:11.011 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:11.011 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:11.011 13:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:31:11.011 13:42:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.011 [2024-10-28 13:42:25.039885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:11.011 [2024-10-28 13:42:25.075063] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:11.011 [2024-10-28 13:42:25.075206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.011 [2024-10-28 13:42:25.075235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:11.011 [2024-10-28 13:42:25.075251] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:11.011 13:42:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.011 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:11.011 "name": "raid_bdev1", 00:31:11.011 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:11.011 "strip_size_kb": 0, 00:31:11.011 "state": "online", 00:31:11.011 "raid_level": "raid1", 00:31:11.011 "superblock": true, 00:31:11.011 "num_base_bdevs": 4, 00:31:11.011 "num_base_bdevs_discovered": 2, 00:31:11.011 "num_base_bdevs_operational": 2, 00:31:11.011 "base_bdevs_list": [ 00:31:11.011 { 00:31:11.011 "name": null, 00:31:11.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.011 "is_configured": false, 00:31:11.011 "data_offset": 0, 00:31:11.011 "data_size": 63488 00:31:11.011 }, 00:31:11.012 { 00:31:11.012 "name": null, 00:31:11.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.012 "is_configured": false, 00:31:11.012 "data_offset": 2048, 00:31:11.012 "data_size": 63488 00:31:11.012 }, 00:31:11.012 { 00:31:11.012 "name": "BaseBdev3", 00:31:11.012 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:11.012 "is_configured": true, 00:31:11.012 "data_offset": 2048, 00:31:11.012 "data_size": 63488 00:31:11.012 }, 00:31:11.012 { 00:31:11.012 "name": "BaseBdev4", 00:31:11.012 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:11.012 "is_configured": true, 00:31:11.012 "data_offset": 2048, 00:31:11.012 
"data_size": 63488 00:31:11.012 } 00:31:11.012 ] 00:31:11.012 }' 00:31:11.012 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:11.012 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.580 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:11.580 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.580 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.580 [2024-10-28 13:42:25.597364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:11.581 [2024-10-28 13:42:25.597450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.581 [2024-10-28 13:42:25.597488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:11.581 [2024-10-28 13:42:25.597506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.581 [2024-10-28 13:42:25.598061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.581 [2024-10-28 13:42:25.598103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:11.581 [2024-10-28 13:42:25.598238] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:11.581 [2024-10-28 13:42:25.598266] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:11.581 [2024-10-28 13:42:25.598279] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:11.581 [2024-10-28 13:42:25.598316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:11.581 [2024-10-28 13:42:25.604423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:31:11.581 spare 00:31:11.581 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.581 13:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:31:11.581 [2024-10-28 13:42:25.607360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:12.518 "name": "raid_bdev1", 00:31:12.518 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:12.518 "strip_size_kb": 0, 00:31:12.518 
"state": "online", 00:31:12.518 "raid_level": "raid1", 00:31:12.518 "superblock": true, 00:31:12.518 "num_base_bdevs": 4, 00:31:12.518 "num_base_bdevs_discovered": 3, 00:31:12.518 "num_base_bdevs_operational": 3, 00:31:12.518 "process": { 00:31:12.518 "type": "rebuild", 00:31:12.518 "target": "spare", 00:31:12.518 "progress": { 00:31:12.518 "blocks": 20480, 00:31:12.518 "percent": 32 00:31:12.518 } 00:31:12.518 }, 00:31:12.518 "base_bdevs_list": [ 00:31:12.518 { 00:31:12.518 "name": "spare", 00:31:12.518 "uuid": "c5f4dc31-107d-549b-aa6a-6e85b4dfdb22", 00:31:12.518 "is_configured": true, 00:31:12.518 "data_offset": 2048, 00:31:12.518 "data_size": 63488 00:31:12.518 }, 00:31:12.518 { 00:31:12.518 "name": null, 00:31:12.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.518 "is_configured": false, 00:31:12.518 "data_offset": 2048, 00:31:12.518 "data_size": 63488 00:31:12.518 }, 00:31:12.518 { 00:31:12.518 "name": "BaseBdev3", 00:31:12.518 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:12.518 "is_configured": true, 00:31:12.518 "data_offset": 2048, 00:31:12.518 "data_size": 63488 00:31:12.518 }, 00:31:12.518 { 00:31:12.518 "name": "BaseBdev4", 00:31:12.518 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:12.518 "is_configured": true, 00:31:12.518 "data_offset": 2048, 00:31:12.518 "data_size": 63488 00:31:12.518 } 00:31:12.518 ] 00:31:12.518 }' 00:31:12.518 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:31:12.777 13:42:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.777 [2024-10-28 13:42:26.765012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:12.777 [2024-10-28 13:42:26.816253] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:12.777 [2024-10-28 13:42:26.816369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:12.777 [2024-10-28 13:42:26.816402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:12.777 [2024-10-28 13:42:26.816414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:12.777 13:42:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:12.777 "name": "raid_bdev1", 00:31:12.777 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:12.777 "strip_size_kb": 0, 00:31:12.777 "state": "online", 00:31:12.777 "raid_level": "raid1", 00:31:12.777 "superblock": true, 00:31:12.777 "num_base_bdevs": 4, 00:31:12.777 "num_base_bdevs_discovered": 2, 00:31:12.777 "num_base_bdevs_operational": 2, 00:31:12.777 "base_bdevs_list": [ 00:31:12.777 { 00:31:12.777 "name": null, 00:31:12.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.777 "is_configured": false, 00:31:12.777 "data_offset": 0, 00:31:12.777 "data_size": 63488 00:31:12.777 }, 00:31:12.777 { 00:31:12.777 "name": null, 00:31:12.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:12.777 "is_configured": false, 00:31:12.777 "data_offset": 2048, 00:31:12.777 "data_size": 63488 00:31:12.777 }, 00:31:12.777 { 00:31:12.777 "name": "BaseBdev3", 00:31:12.777 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:12.777 "is_configured": true, 00:31:12.777 "data_offset": 2048, 00:31:12.777 "data_size": 63488 00:31:12.777 }, 00:31:12.777 { 00:31:12.777 "name": "BaseBdev4", 00:31:12.777 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:12.777 "is_configured": true, 00:31:12.777 "data_offset": 2048, 00:31:12.777 
"data_size": 63488 00:31:12.777 } 00:31:12.777 ] 00:31:12.777 }' 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:12.777 13:42:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.372 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:13.372 "name": "raid_bdev1", 00:31:13.372 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:13.372 "strip_size_kb": 0, 00:31:13.372 "state": "online", 00:31:13.372 "raid_level": "raid1", 00:31:13.372 "superblock": true, 00:31:13.372 "num_base_bdevs": 4, 00:31:13.372 "num_base_bdevs_discovered": 2, 00:31:13.372 "num_base_bdevs_operational": 2, 00:31:13.372 "base_bdevs_list": [ 00:31:13.372 { 00:31:13.372 "name": null, 00:31:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:13.372 "is_configured": false, 00:31:13.372 "data_offset": 0, 00:31:13.372 "data_size": 63488 00:31:13.372 }, 00:31:13.372 { 00:31:13.372 "name": null, 00:31:13.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.372 "is_configured": false, 00:31:13.373 "data_offset": 2048, 00:31:13.373 "data_size": 63488 00:31:13.373 }, 00:31:13.373 { 00:31:13.373 "name": "BaseBdev3", 00:31:13.373 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:13.373 "is_configured": true, 00:31:13.373 "data_offset": 2048, 00:31:13.373 "data_size": 63488 00:31:13.373 }, 00:31:13.373 { 00:31:13.373 "name": "BaseBdev4", 00:31:13.373 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:13.373 "is_configured": true, 00:31:13.373 "data_offset": 2048, 00:31:13.373 "data_size": 63488 00:31:13.373 } 00:31:13.373 ] 00:31:13.373 }' 00:31:13.373 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:13.373 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:13.373 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.631 13:42:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:13.631 [2024-10-28 13:42:27.546498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:13.631 [2024-10-28 13:42:27.546703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.631 [2024-10-28 13:42:27.546751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:31:13.631 [2024-10-28 13:42:27.546767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.631 [2024-10-28 13:42:27.547323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.631 [2024-10-28 13:42:27.547349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:13.631 [2024-10-28 13:42:27.547471] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:13.631 [2024-10-28 13:42:27.547493] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:13.631 [2024-10-28 13:42:27.547507] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:13.631 [2024-10-28 13:42:27.547521] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:13.631 BaseBdev1 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.631 13:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.568 "name": "raid_bdev1", 00:31:14.568 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:14.568 "strip_size_kb": 0, 00:31:14.568 "state": "online", 00:31:14.568 "raid_level": "raid1", 00:31:14.568 "superblock": true, 00:31:14.568 "num_base_bdevs": 4, 00:31:14.568 "num_base_bdevs_discovered": 2, 00:31:14.568 "num_base_bdevs_operational": 2, 00:31:14.568 "base_bdevs_list": [ 00:31:14.568 { 00:31:14.568 "name": null, 00:31:14.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.568 "is_configured": false, 00:31:14.568 
"data_offset": 0, 00:31:14.568 "data_size": 63488 00:31:14.568 }, 00:31:14.568 { 00:31:14.568 "name": null, 00:31:14.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.568 "is_configured": false, 00:31:14.568 "data_offset": 2048, 00:31:14.568 "data_size": 63488 00:31:14.568 }, 00:31:14.568 { 00:31:14.568 "name": "BaseBdev3", 00:31:14.568 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:14.568 "is_configured": true, 00:31:14.568 "data_offset": 2048, 00:31:14.568 "data_size": 63488 00:31:14.568 }, 00:31:14.568 { 00:31:14.568 "name": "BaseBdev4", 00:31:14.568 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:14.568 "is_configured": true, 00:31:14.568 "data_offset": 2048, 00:31:14.568 "data_size": 63488 00:31:14.568 } 00:31:14.568 ] 00:31:14.568 }' 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.568 13:42:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.136 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:15.136 "name": "raid_bdev1", 00:31:15.136 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:15.136 "strip_size_kb": 0, 00:31:15.136 "state": "online", 00:31:15.136 "raid_level": "raid1", 00:31:15.136 "superblock": true, 00:31:15.136 "num_base_bdevs": 4, 00:31:15.136 "num_base_bdevs_discovered": 2, 00:31:15.136 "num_base_bdevs_operational": 2, 00:31:15.136 "base_bdevs_list": [ 00:31:15.136 { 00:31:15.136 "name": null, 00:31:15.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.136 "is_configured": false, 00:31:15.136 "data_offset": 0, 00:31:15.136 "data_size": 63488 00:31:15.136 }, 00:31:15.136 { 00:31:15.136 "name": null, 00:31:15.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.136 "is_configured": false, 00:31:15.136 "data_offset": 2048, 00:31:15.136 "data_size": 63488 00:31:15.136 }, 00:31:15.136 { 00:31:15.136 "name": "BaseBdev3", 00:31:15.136 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:15.136 "is_configured": true, 00:31:15.137 "data_offset": 2048, 00:31:15.137 "data_size": 63488 00:31:15.137 }, 00:31:15.137 { 00:31:15.137 "name": "BaseBdev4", 00:31:15.137 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:15.137 "is_configured": true, 00:31:15.137 "data_offset": 2048, 00:31:15.137 "data_size": 63488 00:31:15.137 } 00:31:15.137 ] 00:31:15.137 }' 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:15.137 
13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:15.137 [2024-10-28 13:42:29.247243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:15.137 [2024-10-28 13:42:29.247488] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:15.137 [2024-10-28 13:42:29.247513] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:15.137 request: 00:31:15.137 { 00:31:15.137 "base_bdev": "BaseBdev1", 00:31:15.137 "raid_bdev": "raid_bdev1", 00:31:15.137 "method": "bdev_raid_add_base_bdev", 00:31:15.137 "req_id": 1 00:31:15.137 } 00:31:15.137 Got JSON-RPC error response 00:31:15.137 response: 00:31:15.137 { 00:31:15.137 "code": -22, 00:31:15.137 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:15.137 } 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:15.137 13:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.515 13:42:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.515 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:16.515 "name": "raid_bdev1", 00:31:16.515 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:16.515 "strip_size_kb": 0, 00:31:16.515 "state": "online", 00:31:16.515 "raid_level": "raid1", 00:31:16.515 "superblock": true, 00:31:16.515 "num_base_bdevs": 4, 00:31:16.515 "num_base_bdevs_discovered": 2, 00:31:16.515 "num_base_bdevs_operational": 2, 00:31:16.515 "base_bdevs_list": [ 00:31:16.515 { 00:31:16.515 "name": null, 00:31:16.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.515 "is_configured": false, 00:31:16.515 "data_offset": 0, 00:31:16.515 "data_size": 63488 00:31:16.515 }, 00:31:16.515 { 00:31:16.515 "name": null, 00:31:16.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.515 "is_configured": false, 00:31:16.515 "data_offset": 2048, 00:31:16.515 "data_size": 63488 00:31:16.515 }, 00:31:16.515 { 00:31:16.515 "name": "BaseBdev3", 00:31:16.515 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:16.515 "is_configured": true, 00:31:16.515 "data_offset": 2048, 00:31:16.515 "data_size": 63488 00:31:16.515 }, 00:31:16.515 { 00:31:16.515 "name": "BaseBdev4", 00:31:16.515 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:16.516 "is_configured": true, 00:31:16.516 "data_offset": 2048, 00:31:16.516 "data_size": 63488 00:31:16.516 } 00:31:16.516 ] 00:31:16.516 }' 00:31:16.516 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:16.516 13:42:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.774 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:16.774 "name": "raid_bdev1", 00:31:16.774 "uuid": "1ecd3197-bd85-411e-a935-d130c0d058f7", 00:31:16.774 "strip_size_kb": 0, 00:31:16.774 "state": "online", 00:31:16.774 "raid_level": "raid1", 00:31:16.774 "superblock": true, 00:31:16.774 "num_base_bdevs": 4, 00:31:16.774 "num_base_bdevs_discovered": 2, 00:31:16.774 "num_base_bdevs_operational": 2, 00:31:16.774 "base_bdevs_list": [ 00:31:16.774 { 00:31:16.774 "name": null, 00:31:16.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.774 "is_configured": false, 00:31:16.774 "data_offset": 0, 00:31:16.774 "data_size": 63488 00:31:16.774 }, 00:31:16.774 { 00:31:16.774 "name": null, 00:31:16.774 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:16.774 "is_configured": false, 00:31:16.774 "data_offset": 2048, 00:31:16.774 "data_size": 63488 00:31:16.775 }, 00:31:16.775 { 00:31:16.775 "name": "BaseBdev3", 00:31:16.775 "uuid": "fba59e53-efc2-5170-9c5d-364f120502ae", 00:31:16.775 "is_configured": true, 00:31:16.775 "data_offset": 2048, 00:31:16.775 "data_size": 63488 00:31:16.775 }, 00:31:16.775 { 00:31:16.775 "name": "BaseBdev4", 00:31:16.775 "uuid": "81b76d65-ee29-551e-84d5-33bf7b995385", 00:31:16.775 "is_configured": true, 00:31:16.775 "data_offset": 2048, 00:31:16.775 "data_size": 63488 00:31:16.775 } 00:31:16.775 ] 00:31:16.775 }' 00:31:16.775 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:16.775 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:16.775 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91932 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 91932 ']' 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 91932 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91932 00:31:17.033 killing process with pid 91932 00:31:17.033 Received shutdown signal, test time was about 19.036478 seconds 00:31:17.033 00:31:17.033 Latency(us) 00:31:17.033 [2024-10-28T13:42:31.193Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:31:17.033 [2024-10-28T13:42:31.193Z] =================================================================================================================== 00:31:17.033 [2024-10-28T13:42:31.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91932' 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 91932 00:31:17.033 [2024-10-28 13:42:30.971412] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:17.033 13:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 91932 00:31:17.033 [2024-10-28 13:42:30.971612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.033 [2024-10-28 13:42:30.971714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.033 [2024-10-28 13:42:30.971739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:17.033 [2024-10-28 13:42:31.021578] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:17.293 13:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:31:17.293 00:31:17.293 real 0m21.316s 00:31:17.293 user 0m29.527s 00:31:17.293 sys 0m2.354s 00:31:17.293 ************************************ 00:31:17.293 END TEST raid_rebuild_test_sb_io 00:31:17.293 ************************************ 00:31:17.293 13:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.293 13:42:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:31:17.293 13:42:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:31:17.293 13:42:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:31:17.293 13:42:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:17.293 13:42:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.293 13:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:17.293 ************************************ 00:31:17.293 START TEST raid5f_state_function_test 00:31:17.293 ************************************ 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:17.293 13:42:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:17.293 Process raid pid: 92657 00:31:17.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92657 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92657' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92657 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 92657 ']' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:17.293 13:42:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.293 [2024-10-28 13:42:31.432191] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:31:17.293 [2024-10-28 13:42:31.432619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.553 [2024-10-28 13:42:31.587701] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:17.553 [2024-10-28 13:42:31.619415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.553 [2024-10-28 13:42:31.673874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.812 [2024-10-28 13:42:31.731076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:17.812 [2024-10-28 13:42:31.731112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.394 [2024-10-28 13:42:32.446878] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:18.394 [2024-10-28 13:42:32.446953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:18.394 [2024-10-28 13:42:32.446974] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:18.394 [2024-10-28 13:42:32.446989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:18.394 [2024-10-28 13:42:32.447008] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:18.394 [2024-10-28 13:42:32.447021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.394 13:42:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:18.394 "name": "Existed_Raid", 00:31:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.394 "strip_size_kb": 64, 00:31:18.394 "state": 
"configuring", 00:31:18.394 "raid_level": "raid5f", 00:31:18.394 "superblock": false, 00:31:18.394 "num_base_bdevs": 3, 00:31:18.394 "num_base_bdevs_discovered": 0, 00:31:18.394 "num_base_bdevs_operational": 3, 00:31:18.394 "base_bdevs_list": [ 00:31:18.394 { 00:31:18.394 "name": "BaseBdev1", 00:31:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.394 "is_configured": false, 00:31:18.394 "data_offset": 0, 00:31:18.394 "data_size": 0 00:31:18.394 }, 00:31:18.394 { 00:31:18.394 "name": "BaseBdev2", 00:31:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.394 "is_configured": false, 00:31:18.394 "data_offset": 0, 00:31:18.394 "data_size": 0 00:31:18.394 }, 00:31:18.394 { 00:31:18.394 "name": "BaseBdev3", 00:31:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.394 "is_configured": false, 00:31:18.394 "data_offset": 0, 00:31:18.394 "data_size": 0 00:31:18.394 } 00:31:18.394 ] 00:31:18.394 }' 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:18.394 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.979 [2024-10-28 13:42:32.930870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:18.979 [2024-10-28 13:42:32.930911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r 
raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.979 [2024-10-28 13:42:32.942906] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:18.979 [2024-10-28 13:42:32.942958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:18.979 [2024-10-28 13:42:32.942979] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:18.979 [2024-10-28 13:42:32.942994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:18.979 [2024-10-28 13:42:32.943007] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:18.979 [2024-10-28 13:42:32.943020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.979 [2024-10-28 13:42:32.966986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:18.979 BaseBdev1 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:18.979 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.980 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.980 [ 00:31:18.980 { 00:31:18.980 "name": "BaseBdev1", 00:31:18.980 "aliases": [ 00:31:18.980 "9038fe8e-cfb0-404e-b4f4-1c104058bfd7" 00:31:18.980 ], 00:31:18.980 "product_name": "Malloc disk", 00:31:18.980 "block_size": 512, 00:31:18.980 "num_blocks": 65536, 00:31:18.980 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:18.980 "assigned_rate_limits": { 00:31:18.980 "rw_ios_per_sec": 0, 00:31:18.980 "rw_mbytes_per_sec": 0, 00:31:18.980 "r_mbytes_per_sec": 0, 00:31:18.980 "w_mbytes_per_sec": 0 00:31:18.980 }, 00:31:18.980 "claimed": true, 00:31:18.980 "claim_type": "exclusive_write", 00:31:18.980 "zoned": false, 00:31:18.980 "supported_io_types": { 00:31:18.980 "read": true, 00:31:18.980 "write": true, 
00:31:18.980 "unmap": true, 00:31:18.980 "flush": true, 00:31:18.980 "reset": true, 00:31:18.980 "nvme_admin": false, 00:31:18.980 "nvme_io": false, 00:31:18.980 "nvme_io_md": false, 00:31:18.980 "write_zeroes": true, 00:31:18.980 "zcopy": true, 00:31:18.980 "get_zone_info": false, 00:31:18.980 "zone_management": false, 00:31:18.980 "zone_append": false, 00:31:18.980 "compare": false, 00:31:18.980 "compare_and_write": false, 00:31:18.980 "abort": true, 00:31:18.980 "seek_hole": false, 00:31:18.980 "seek_data": false, 00:31:18.980 "copy": true, 00:31:18.980 "nvme_iov_md": false 00:31:18.980 }, 00:31:18.980 "memory_domains": [ 00:31:18.980 { 00:31:18.980 "dma_device_id": "system", 00:31:18.980 "dma_device_type": 1 00:31:18.980 }, 00:31:18.980 { 00:31:18.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:18.980 "dma_device_type": 2 00:31:18.980 } 00:31:18.980 ], 00:31:18.980 "driver_specific": {} 00:31:18.980 } 00:31:18.980 ] 00:31:18.980 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.980 13:42:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:18.980 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:18.980 13:42:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:18.980 "name": "Existed_Raid", 00:31:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.980 "strip_size_kb": 64, 00:31:18.980 "state": "configuring", 00:31:18.980 "raid_level": "raid5f", 00:31:18.980 "superblock": false, 00:31:18.980 "num_base_bdevs": 3, 00:31:18.980 "num_base_bdevs_discovered": 1, 00:31:18.980 "num_base_bdevs_operational": 3, 00:31:18.980 "base_bdevs_list": [ 00:31:18.980 { 00:31:18.980 "name": "BaseBdev1", 00:31:18.980 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:18.980 "is_configured": true, 00:31:18.980 "data_offset": 0, 00:31:18.980 "data_size": 65536 00:31:18.980 }, 00:31:18.980 { 00:31:18.980 "name": "BaseBdev2", 00:31:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.980 "is_configured": false, 00:31:18.980 "data_offset": 0, 00:31:18.980 "data_size": 0 00:31:18.980 }, 00:31:18.980 { 00:31:18.980 "name": "BaseBdev3", 00:31:18.980 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:18.980 "is_configured": false, 00:31:18.980 "data_offset": 0, 00:31:18.980 "data_size": 0 00:31:18.980 } 00:31:18.980 ] 00:31:18.980 }' 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:18.980 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.550 [2024-10-28 13:42:33.499185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:19.550 [2024-10-28 13:42:33.499257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.550 [2024-10-28 13:42:33.507203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:19.550 [2024-10-28 13:42:33.510349] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:19.550 [2024-10-28 13:42:33.510401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:19.550 [2024-10-28 13:42:33.510423] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:31:19.550 [2024-10-28 13:42:33.510440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.550 "name": "Existed_Raid", 00:31:19.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.550 "strip_size_kb": 64, 00:31:19.550 "state": "configuring", 00:31:19.550 "raid_level": "raid5f", 00:31:19.550 "superblock": false, 00:31:19.550 "num_base_bdevs": 3, 00:31:19.550 "num_base_bdevs_discovered": 1, 00:31:19.550 "num_base_bdevs_operational": 3, 00:31:19.550 "base_bdevs_list": [ 00:31:19.550 { 00:31:19.550 "name": "BaseBdev1", 00:31:19.550 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:19.550 "is_configured": true, 00:31:19.550 "data_offset": 0, 00:31:19.550 "data_size": 65536 00:31:19.550 }, 00:31:19.550 { 00:31:19.550 "name": "BaseBdev2", 00:31:19.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.550 "is_configured": false, 00:31:19.550 "data_offset": 0, 00:31:19.550 "data_size": 0 00:31:19.550 }, 00:31:19.550 { 00:31:19.550 "name": "BaseBdev3", 00:31:19.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.550 "is_configured": false, 00:31:19.550 "data_offset": 0, 00:31:19.550 "data_size": 0 00:31:19.550 } 00:31:19.550 ] 00:31:19.550 }' 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.550 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.118 13:42:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:20.118 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.118 13:42:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:31:20.118 BaseBdev2 00:31:20.118 [2024-10-28 13:42:34.012678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.118 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.118 [ 00:31:20.118 { 00:31:20.118 "name": "BaseBdev2", 00:31:20.118 "aliases": [ 00:31:20.118 "9657ad60-9a86-4c7e-8e04-83a85d4caa37" 00:31:20.118 ], 00:31:20.118 "product_name": "Malloc disk", 00:31:20.118 "block_size": 512, 00:31:20.118 
"num_blocks": 65536, 00:31:20.118 "uuid": "9657ad60-9a86-4c7e-8e04-83a85d4caa37", 00:31:20.118 "assigned_rate_limits": { 00:31:20.118 "rw_ios_per_sec": 0, 00:31:20.118 "rw_mbytes_per_sec": 0, 00:31:20.118 "r_mbytes_per_sec": 0, 00:31:20.118 "w_mbytes_per_sec": 0 00:31:20.118 }, 00:31:20.118 "claimed": true, 00:31:20.118 "claim_type": "exclusive_write", 00:31:20.119 "zoned": false, 00:31:20.119 "supported_io_types": { 00:31:20.119 "read": true, 00:31:20.119 "write": true, 00:31:20.119 "unmap": true, 00:31:20.119 "flush": true, 00:31:20.119 "reset": true, 00:31:20.119 "nvme_admin": false, 00:31:20.119 "nvme_io": false, 00:31:20.119 "nvme_io_md": false, 00:31:20.119 "write_zeroes": true, 00:31:20.119 "zcopy": true, 00:31:20.119 "get_zone_info": false, 00:31:20.119 "zone_management": false, 00:31:20.119 "zone_append": false, 00:31:20.119 "compare": false, 00:31:20.119 "compare_and_write": false, 00:31:20.119 "abort": true, 00:31:20.119 "seek_hole": false, 00:31:20.119 "seek_data": false, 00:31:20.119 "copy": true, 00:31:20.119 "nvme_iov_md": false 00:31:20.119 }, 00:31:20.119 "memory_domains": [ 00:31:20.119 { 00:31:20.119 "dma_device_id": "system", 00:31:20.119 "dma_device_type": 1 00:31:20.119 }, 00:31:20.119 { 00:31:20.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.119 "dma_device_type": 2 00:31:20.119 } 00:31:20.119 ], 00:31:20.119 "driver_specific": {} 00:31:20.119 } 00:31:20.119 ] 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 
00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.119 "name": "Existed_Raid", 00:31:20.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.119 "strip_size_kb": 64, 00:31:20.119 "state": "configuring", 00:31:20.119 "raid_level": "raid5f", 00:31:20.119 "superblock": false, 00:31:20.119 "num_base_bdevs": 3, 
00:31:20.119 "num_base_bdevs_discovered": 2, 00:31:20.119 "num_base_bdevs_operational": 3, 00:31:20.119 "base_bdevs_list": [ 00:31:20.119 { 00:31:20.119 "name": "BaseBdev1", 00:31:20.119 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:20.119 "is_configured": true, 00:31:20.119 "data_offset": 0, 00:31:20.119 "data_size": 65536 00:31:20.119 }, 00:31:20.119 { 00:31:20.119 "name": "BaseBdev2", 00:31:20.119 "uuid": "9657ad60-9a86-4c7e-8e04-83a85d4caa37", 00:31:20.119 "is_configured": true, 00:31:20.119 "data_offset": 0, 00:31:20.119 "data_size": 65536 00:31:20.119 }, 00:31:20.119 { 00:31:20.119 "name": "BaseBdev3", 00:31:20.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.119 "is_configured": false, 00:31:20.119 "data_offset": 0, 00:31:20.119 "data_size": 0 00:31:20.119 } 00:31:20.119 ] 00:31:20.119 }' 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.119 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.378 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:20.378 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.378 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.638 [2024-10-28 13:42:34.556074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:20.638 [2024-10-28 13:42:34.556422] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:20.638 [2024-10-28 13:42:34.556449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:20.638 [2024-10-28 13:42:34.556817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:20.638 [2024-10-28 13:42:34.557411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:31:20.638 [2024-10-28 13:42:34.557435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:31:20.638 BaseBdev3 00:31:20.638 [2024-10-28 13:42:34.557697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.638 [ 00:31:20.638 { 00:31:20.638 "name": "BaseBdev3", 00:31:20.638 "aliases": 
[ 00:31:20.638 "f7ba4fac-8e49-488e-90a8-3d5524bce500" 00:31:20.638 ], 00:31:20.638 "product_name": "Malloc disk", 00:31:20.638 "block_size": 512, 00:31:20.638 "num_blocks": 65536, 00:31:20.638 "uuid": "f7ba4fac-8e49-488e-90a8-3d5524bce500", 00:31:20.638 "assigned_rate_limits": { 00:31:20.638 "rw_ios_per_sec": 0, 00:31:20.638 "rw_mbytes_per_sec": 0, 00:31:20.638 "r_mbytes_per_sec": 0, 00:31:20.638 "w_mbytes_per_sec": 0 00:31:20.638 }, 00:31:20.638 "claimed": true, 00:31:20.638 "claim_type": "exclusive_write", 00:31:20.638 "zoned": false, 00:31:20.638 "supported_io_types": { 00:31:20.638 "read": true, 00:31:20.638 "write": true, 00:31:20.638 "unmap": true, 00:31:20.638 "flush": true, 00:31:20.638 "reset": true, 00:31:20.638 "nvme_admin": false, 00:31:20.638 "nvme_io": false, 00:31:20.638 "nvme_io_md": false, 00:31:20.638 "write_zeroes": true, 00:31:20.638 "zcopy": true, 00:31:20.638 "get_zone_info": false, 00:31:20.638 "zone_management": false, 00:31:20.638 "zone_append": false, 00:31:20.638 "compare": false, 00:31:20.638 "compare_and_write": false, 00:31:20.638 "abort": true, 00:31:20.638 "seek_hole": false, 00:31:20.638 "seek_data": false, 00:31:20.638 "copy": true, 00:31:20.638 "nvme_iov_md": false 00:31:20.638 }, 00:31:20.638 "memory_domains": [ 00:31:20.638 { 00:31:20.638 "dma_device_id": "system", 00:31:20.638 "dma_device_type": 1 00:31:20.638 }, 00:31:20.638 { 00:31:20.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.638 "dma_device_type": 2 00:31:20.638 } 00:31:20.638 ], 00:31:20.638 "driver_specific": {} 00:31:20.638 } 00:31:20.638 ] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs 
)) 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.638 "name": "Existed_Raid", 00:31:20.638 "uuid": "c762a2fd-86a0-414a-8b48-8c3248d2a830", 00:31:20.638 "strip_size_kb": 64, 
00:31:20.638 "state": "online", 00:31:20.638 "raid_level": "raid5f", 00:31:20.638 "superblock": false, 00:31:20.638 "num_base_bdevs": 3, 00:31:20.638 "num_base_bdevs_discovered": 3, 00:31:20.638 "num_base_bdevs_operational": 3, 00:31:20.638 "base_bdevs_list": [ 00:31:20.638 { 00:31:20.638 "name": "BaseBdev1", 00:31:20.638 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:20.638 "is_configured": true, 00:31:20.638 "data_offset": 0, 00:31:20.638 "data_size": 65536 00:31:20.638 }, 00:31:20.638 { 00:31:20.638 "name": "BaseBdev2", 00:31:20.638 "uuid": "9657ad60-9a86-4c7e-8e04-83a85d4caa37", 00:31:20.638 "is_configured": true, 00:31:20.638 "data_offset": 0, 00:31:20.638 "data_size": 65536 00:31:20.638 }, 00:31:20.638 { 00:31:20.638 "name": "BaseBdev3", 00:31:20.638 "uuid": "f7ba4fac-8e49-488e-90a8-3d5524bce500", 00:31:20.638 "is_configured": true, 00:31:20.638 "data_offset": 0, 00:31:20.638 "data_size": 65536 00:31:20.638 } 00:31:20.638 ] 00:31:20.638 }' 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.638 13:42:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.207 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:21.207 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:21.207 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.208 [2024-10-28 13:42:35.104527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:21.208 "name": "Existed_Raid", 00:31:21.208 "aliases": [ 00:31:21.208 "c762a2fd-86a0-414a-8b48-8c3248d2a830" 00:31:21.208 ], 00:31:21.208 "product_name": "Raid Volume", 00:31:21.208 "block_size": 512, 00:31:21.208 "num_blocks": 131072, 00:31:21.208 "uuid": "c762a2fd-86a0-414a-8b48-8c3248d2a830", 00:31:21.208 "assigned_rate_limits": { 00:31:21.208 "rw_ios_per_sec": 0, 00:31:21.208 "rw_mbytes_per_sec": 0, 00:31:21.208 "r_mbytes_per_sec": 0, 00:31:21.208 "w_mbytes_per_sec": 0 00:31:21.208 }, 00:31:21.208 "claimed": false, 00:31:21.208 "zoned": false, 00:31:21.208 "supported_io_types": { 00:31:21.208 "read": true, 00:31:21.208 "write": true, 00:31:21.208 "unmap": false, 00:31:21.208 "flush": false, 00:31:21.208 "reset": true, 00:31:21.208 "nvme_admin": false, 00:31:21.208 "nvme_io": false, 00:31:21.208 "nvme_io_md": false, 00:31:21.208 "write_zeroes": true, 00:31:21.208 "zcopy": false, 00:31:21.208 "get_zone_info": false, 00:31:21.208 "zone_management": false, 00:31:21.208 "zone_append": false, 00:31:21.208 "compare": false, 00:31:21.208 "compare_and_write": false, 00:31:21.208 "abort": false, 00:31:21.208 "seek_hole": false, 00:31:21.208 "seek_data": false, 00:31:21.208 "copy": false, 00:31:21.208 "nvme_iov_md": false 00:31:21.208 }, 00:31:21.208 "driver_specific": { 00:31:21.208 "raid": { 00:31:21.208 "uuid": 
"c762a2fd-86a0-414a-8b48-8c3248d2a830", 00:31:21.208 "strip_size_kb": 64, 00:31:21.208 "state": "online", 00:31:21.208 "raid_level": "raid5f", 00:31:21.208 "superblock": false, 00:31:21.208 "num_base_bdevs": 3, 00:31:21.208 "num_base_bdevs_discovered": 3, 00:31:21.208 "num_base_bdevs_operational": 3, 00:31:21.208 "base_bdevs_list": [ 00:31:21.208 { 00:31:21.208 "name": "BaseBdev1", 00:31:21.208 "uuid": "9038fe8e-cfb0-404e-b4f4-1c104058bfd7", 00:31:21.208 "is_configured": true, 00:31:21.208 "data_offset": 0, 00:31:21.208 "data_size": 65536 00:31:21.208 }, 00:31:21.208 { 00:31:21.208 "name": "BaseBdev2", 00:31:21.208 "uuid": "9657ad60-9a86-4c7e-8e04-83a85d4caa37", 00:31:21.208 "is_configured": true, 00:31:21.208 "data_offset": 0, 00:31:21.208 "data_size": 65536 00:31:21.208 }, 00:31:21.208 { 00:31:21.208 "name": "BaseBdev3", 00:31:21.208 "uuid": "f7ba4fac-8e49-488e-90a8-3d5524bce500", 00:31:21.208 "is_configured": true, 00:31:21.208 "data_offset": 0, 00:31:21.208 "data_size": 65536 00:31:21.208 } 00:31:21.208 ] 00:31:21.208 } 00:31:21.208 } 00:31:21.208 }' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:21.208 BaseBdev2 00:31:21.208 BaseBdev3' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:21.208 13:42:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.208 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.468 [2024-10-28 13:42:35.412508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:21.468 "name": "Existed_Raid", 00:31:21.468 "uuid": "c762a2fd-86a0-414a-8b48-8c3248d2a830", 00:31:21.468 "strip_size_kb": 64, 00:31:21.468 "state": "online", 00:31:21.468 "raid_level": "raid5f", 00:31:21.468 "superblock": false, 00:31:21.468 "num_base_bdevs": 3, 00:31:21.468 "num_base_bdevs_discovered": 2, 00:31:21.468 "num_base_bdevs_operational": 2, 00:31:21.468 "base_bdevs_list": [ 00:31:21.468 { 00:31:21.468 "name": null, 00:31:21.468 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:21.468 "is_configured": false, 00:31:21.468 "data_offset": 0, 00:31:21.468 "data_size": 65536 00:31:21.468 }, 00:31:21.468 { 00:31:21.468 "name": "BaseBdev2", 00:31:21.468 "uuid": "9657ad60-9a86-4c7e-8e04-83a85d4caa37", 00:31:21.468 "is_configured": true, 00:31:21.468 "data_offset": 0, 00:31:21.468 "data_size": 65536 00:31:21.468 }, 00:31:21.468 { 00:31:21.468 "name": "BaseBdev3", 00:31:21.468 "uuid": "f7ba4fac-8e49-488e-90a8-3d5524bce500", 00:31:21.468 "is_configured": true, 00:31:21.468 "data_offset": 0, 00:31:21.468 "data_size": 65536 00:31:21.468 } 00:31:21.468 ] 00:31:21.468 }' 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:21.468 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 13:42:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:22.037 
13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 [2024-10-28 13:42:36.010377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:22.037 [2024-10-28 13:42:36.010693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:22.037 [2024-10-28 13:42:36.024186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:22.037 [2024-10-28 13:42:36.084272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:22.037 [2024-10-28 13:42:36.084486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 BaseBdev2 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.037 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 [ 00:31:22.037 { 00:31:22.037 "name": "BaseBdev2", 00:31:22.037 "aliases": [ 00:31:22.037 "18d13057-86ad-4341-8337-996f4168fea1" 00:31:22.037 ], 00:31:22.037 "product_name": "Malloc disk", 00:31:22.037 "block_size": 512, 00:31:22.037 "num_blocks": 65536, 00:31:22.037 "uuid": 
"18d13057-86ad-4341-8337-996f4168fea1", 00:31:22.037 "assigned_rate_limits": { 00:31:22.037 "rw_ios_per_sec": 0, 00:31:22.037 "rw_mbytes_per_sec": 0, 00:31:22.037 "r_mbytes_per_sec": 0, 00:31:22.037 "w_mbytes_per_sec": 0 00:31:22.037 }, 00:31:22.037 "claimed": false, 00:31:22.037 "zoned": false, 00:31:22.037 "supported_io_types": { 00:31:22.037 "read": true, 00:31:22.037 "write": true, 00:31:22.037 "unmap": true, 00:31:22.037 "flush": true, 00:31:22.037 "reset": true, 00:31:22.037 "nvme_admin": false, 00:31:22.037 "nvme_io": false, 00:31:22.037 "nvme_io_md": false, 00:31:22.297 "write_zeroes": true, 00:31:22.297 "zcopy": true, 00:31:22.297 "get_zone_info": false, 00:31:22.297 "zone_management": false, 00:31:22.297 "zone_append": false, 00:31:22.297 "compare": false, 00:31:22.297 "compare_and_write": false, 00:31:22.297 "abort": true, 00:31:22.297 "seek_hole": false, 00:31:22.297 "seek_data": false, 00:31:22.297 "copy": true, 00:31:22.297 "nvme_iov_md": false 00:31:22.297 }, 00:31:22.297 "memory_domains": [ 00:31:22.297 { 00:31:22.297 "dma_device_id": "system", 00:31:22.297 "dma_device_type": 1 00:31:22.297 }, 00:31:22.297 { 00:31:22.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.297 "dma_device_type": 2 00:31:22.297 } 00:31:22.297 ], 00:31:22.297 "driver_specific": {} 00:31:22.297 } 00:31:22.297 ] 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.297 BaseBdev3 00:31:22.297 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.298 [ 00:31:22.298 { 00:31:22.298 "name": "BaseBdev3", 00:31:22.298 "aliases": [ 00:31:22.298 "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c" 00:31:22.298 ], 00:31:22.298 "product_name": "Malloc disk", 00:31:22.298 "block_size": 512, 00:31:22.298 "num_blocks": 
65536, 00:31:22.298 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:22.298 "assigned_rate_limits": { 00:31:22.298 "rw_ios_per_sec": 0, 00:31:22.298 "rw_mbytes_per_sec": 0, 00:31:22.298 "r_mbytes_per_sec": 0, 00:31:22.298 "w_mbytes_per_sec": 0 00:31:22.298 }, 00:31:22.298 "claimed": false, 00:31:22.298 "zoned": false, 00:31:22.298 "supported_io_types": { 00:31:22.298 "read": true, 00:31:22.298 "write": true, 00:31:22.298 "unmap": true, 00:31:22.298 "flush": true, 00:31:22.298 "reset": true, 00:31:22.298 "nvme_admin": false, 00:31:22.298 "nvme_io": false, 00:31:22.298 "nvme_io_md": false, 00:31:22.298 "write_zeroes": true, 00:31:22.298 "zcopy": true, 00:31:22.298 "get_zone_info": false, 00:31:22.298 "zone_management": false, 00:31:22.298 "zone_append": false, 00:31:22.298 "compare": false, 00:31:22.298 "compare_and_write": false, 00:31:22.298 "abort": true, 00:31:22.298 "seek_hole": false, 00:31:22.298 "seek_data": false, 00:31:22.298 "copy": true, 00:31:22.298 "nvme_iov_md": false 00:31:22.298 }, 00:31:22.298 "memory_domains": [ 00:31:22.298 { 00:31:22.298 "dma_device_id": "system", 00:31:22.298 "dma_device_type": 1 00:31:22.298 }, 00:31:22.298 { 00:31:22.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.298 "dma_device_type": 2 00:31:22.298 } 00:31:22.298 ], 00:31:22.298 "driver_specific": {} 00:31:22.298 } 00:31:22.298 ] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:22.298 13:42:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.298 [2024-10-28 13:42:36.252753] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:22.298 [2024-10-28 13:42:36.252949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:22.298 [2024-10-28 13:42:36.253106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:22.298 [2024-10-28 13:42:36.255748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.298 "name": "Existed_Raid", 00:31:22.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.298 "strip_size_kb": 64, 00:31:22.298 "state": "configuring", 00:31:22.298 "raid_level": "raid5f", 00:31:22.298 "superblock": false, 00:31:22.298 "num_base_bdevs": 3, 00:31:22.298 "num_base_bdevs_discovered": 2, 00:31:22.298 "num_base_bdevs_operational": 3, 00:31:22.298 "base_bdevs_list": [ 00:31:22.298 { 00:31:22.298 "name": "BaseBdev1", 00:31:22.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.298 "is_configured": false, 00:31:22.298 "data_offset": 0, 00:31:22.298 "data_size": 0 00:31:22.298 }, 00:31:22.298 { 00:31:22.298 "name": "BaseBdev2", 00:31:22.298 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:22.298 "is_configured": true, 00:31:22.298 "data_offset": 0, 00:31:22.298 "data_size": 65536 00:31:22.298 }, 00:31:22.298 { 00:31:22.298 "name": "BaseBdev3", 00:31:22.298 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:22.298 "is_configured": true, 00:31:22.298 "data_offset": 0, 00:31:22.298 "data_size": 65536 00:31:22.298 } 00:31:22.298 ] 00:31:22.298 }' 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.298 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.865 [2024-10-28 13:42:36.784883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.865 "name": "Existed_Raid", 00:31:22.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.865 "strip_size_kb": 64, 00:31:22.865 "state": "configuring", 00:31:22.865 "raid_level": "raid5f", 00:31:22.865 "superblock": false, 00:31:22.865 "num_base_bdevs": 3, 00:31:22.865 "num_base_bdevs_discovered": 1, 00:31:22.865 "num_base_bdevs_operational": 3, 00:31:22.865 "base_bdevs_list": [ 00:31:22.865 { 00:31:22.865 "name": "BaseBdev1", 00:31:22.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.865 "is_configured": false, 00:31:22.865 "data_offset": 0, 00:31:22.865 "data_size": 0 00:31:22.865 }, 00:31:22.865 { 00:31:22.865 "name": null, 00:31:22.865 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:22.865 "is_configured": false, 00:31:22.865 "data_offset": 0, 00:31:22.865 "data_size": 65536 00:31:22.865 }, 00:31:22.865 { 00:31:22.865 "name": "BaseBdev3", 00:31:22.865 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:22.865 "is_configured": true, 00:31:22.865 "data_offset": 0, 00:31:22.865 "data_size": 65536 00:31:22.865 } 00:31:22.865 ] 00:31:22.865 }' 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.865 13:42:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.434 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.435 
13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 [2024-10-28 13:42:37.358107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:23.435 BaseBdev1 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.435 13:42:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 [ 00:31:23.435 { 00:31:23.435 "name": "BaseBdev1", 00:31:23.435 "aliases": [ 00:31:23.435 "2a069743-f591-499a-a3ab-0b3f8b7ff933" 00:31:23.435 ], 00:31:23.435 "product_name": "Malloc disk", 00:31:23.435 "block_size": 512, 00:31:23.435 "num_blocks": 65536, 00:31:23.435 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:23.435 "assigned_rate_limits": { 00:31:23.435 "rw_ios_per_sec": 0, 00:31:23.435 "rw_mbytes_per_sec": 0, 00:31:23.435 "r_mbytes_per_sec": 0, 00:31:23.435 "w_mbytes_per_sec": 0 00:31:23.435 }, 00:31:23.435 "claimed": true, 00:31:23.435 "claim_type": "exclusive_write", 00:31:23.435 "zoned": false, 00:31:23.435 "supported_io_types": { 00:31:23.435 "read": true, 00:31:23.435 "write": true, 00:31:23.435 "unmap": true, 00:31:23.435 "flush": true, 00:31:23.435 "reset": true, 00:31:23.435 "nvme_admin": false, 00:31:23.435 "nvme_io": false, 00:31:23.435 "nvme_io_md": false, 00:31:23.435 "write_zeroes": true, 00:31:23.435 "zcopy": true, 00:31:23.435 "get_zone_info": false, 00:31:23.435 "zone_management": false, 00:31:23.435 "zone_append": false, 00:31:23.435 "compare": false, 00:31:23.435 "compare_and_write": false, 00:31:23.435 "abort": true, 00:31:23.435 "seek_hole": false, 00:31:23.435 "seek_data": false, 00:31:23.435 "copy": true, 00:31:23.435 "nvme_iov_md": false 00:31:23.435 }, 00:31:23.435 "memory_domains": [ 00:31:23.435 { 00:31:23.435 "dma_device_id": "system", 00:31:23.435 "dma_device_type": 1 
00:31:23.435 }, 00:31:23.435 { 00:31:23.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:23.435 "dma_device_type": 2 00:31:23.435 } 00:31:23.435 ], 00:31:23.435 "driver_specific": {} 00:31:23.435 } 00:31:23.435 ] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.435 
13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:23.435 "name": "Existed_Raid", 00:31:23.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.435 "strip_size_kb": 64, 00:31:23.435 "state": "configuring", 00:31:23.435 "raid_level": "raid5f", 00:31:23.435 "superblock": false, 00:31:23.435 "num_base_bdevs": 3, 00:31:23.435 "num_base_bdevs_discovered": 2, 00:31:23.435 "num_base_bdevs_operational": 3, 00:31:23.435 "base_bdevs_list": [ 00:31:23.435 { 00:31:23.435 "name": "BaseBdev1", 00:31:23.435 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:23.435 "is_configured": true, 00:31:23.435 "data_offset": 0, 00:31:23.435 "data_size": 65536 00:31:23.435 }, 00:31:23.435 { 00:31:23.435 "name": null, 00:31:23.435 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:23.435 "is_configured": false, 00:31:23.435 "data_offset": 0, 00:31:23.435 "data_size": 65536 00:31:23.435 }, 00:31:23.435 { 00:31:23.435 "name": "BaseBdev3", 00:31:23.435 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:23.435 "is_configured": true, 00:31:23.435 "data_offset": 0, 00:31:23.435 "data_size": 65536 00:31:23.435 } 00:31:23.435 ] 00:31:23.435 }' 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:23.435 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.001 13:42:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.001 [2024-10-28 13:42:37.982414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:24.001 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.001 13:42:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.002 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.002 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.002 13:42:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.002 13:42:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.002 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.002 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.002 "name": "Existed_Raid", 00:31:24.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.002 "strip_size_kb": 64, 00:31:24.002 "state": "configuring", 00:31:24.002 "raid_level": "raid5f", 00:31:24.002 "superblock": false, 00:31:24.002 "num_base_bdevs": 3, 00:31:24.002 "num_base_bdevs_discovered": 1, 00:31:24.002 "num_base_bdevs_operational": 3, 00:31:24.002 "base_bdevs_list": [ 00:31:24.002 { 00:31:24.002 "name": "BaseBdev1", 00:31:24.002 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:24.002 "is_configured": true, 00:31:24.002 "data_offset": 0, 00:31:24.002 "data_size": 65536 00:31:24.002 }, 00:31:24.002 { 00:31:24.002 "name": null, 00:31:24.002 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:24.002 "is_configured": false, 00:31:24.002 "data_offset": 0, 00:31:24.002 "data_size": 65536 00:31:24.002 }, 00:31:24.002 { 00:31:24.002 "name": null, 00:31:24.002 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:24.002 "is_configured": false, 00:31:24.002 "data_offset": 0, 00:31:24.002 "data_size": 65536 00:31:24.002 } 00:31:24.002 ] 00:31:24.002 }' 00:31:24.002 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.002 13:42:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.569 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 [2024-10-28 13:42:38.550586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:24.570 
13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.570 "name": "Existed_Raid", 00:31:24.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.570 "strip_size_kb": 64, 00:31:24.570 "state": "configuring", 00:31:24.570 "raid_level": "raid5f", 00:31:24.570 "superblock": false, 00:31:24.570 "num_base_bdevs": 3, 00:31:24.570 "num_base_bdevs_discovered": 2, 00:31:24.570 "num_base_bdevs_operational": 3, 00:31:24.570 "base_bdevs_list": [ 00:31:24.570 { 00:31:24.570 "name": "BaseBdev1", 00:31:24.570 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:24.570 "is_configured": true, 00:31:24.570 "data_offset": 0, 00:31:24.570 "data_size": 65536 00:31:24.570 }, 00:31:24.570 { 00:31:24.570 "name": null, 00:31:24.570 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:24.570 "is_configured": 
false, 00:31:24.570 "data_offset": 0, 00:31:24.570 "data_size": 65536 00:31:24.570 }, 00:31:24.570 { 00:31:24.570 "name": "BaseBdev3", 00:31:24.570 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:24.570 "is_configured": true, 00:31:24.570 "data_offset": 0, 00:31:24.570 "data_size": 65536 00:31:24.570 } 00:31:24.570 ] 00:31:24.570 }' 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.570 13:42:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.137 [2024-10-28 13:42:39.122760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:25.137 13:42:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.137 "name": "Existed_Raid", 00:31:25.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.137 "strip_size_kb": 64, 00:31:25.137 "state": "configuring", 00:31:25.137 "raid_level": "raid5f", 00:31:25.137 "superblock": false, 00:31:25.137 "num_base_bdevs": 3, 00:31:25.137 
"num_base_bdevs_discovered": 1, 00:31:25.137 "num_base_bdevs_operational": 3, 00:31:25.137 "base_bdevs_list": [ 00:31:25.137 { 00:31:25.137 "name": null, 00:31:25.137 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:25.137 "is_configured": false, 00:31:25.137 "data_offset": 0, 00:31:25.137 "data_size": 65536 00:31:25.137 }, 00:31:25.137 { 00:31:25.137 "name": null, 00:31:25.137 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:25.137 "is_configured": false, 00:31:25.137 "data_offset": 0, 00:31:25.137 "data_size": 65536 00:31:25.137 }, 00:31:25.137 { 00:31:25.137 "name": "BaseBdev3", 00:31:25.137 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:25.137 "is_configured": true, 00:31:25.137 "data_offset": 0, 00:31:25.137 "data_size": 65536 00:31:25.137 } 00:31:25.137 ] 00:31:25.137 }' 00:31:25.137 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.138 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.706 13:42:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.706 [2024-10-28 13:42:39.726688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.706 13:42:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.706 "name": "Existed_Raid", 00:31:25.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.706 "strip_size_kb": 64, 00:31:25.706 "state": "configuring", 00:31:25.706 "raid_level": "raid5f", 00:31:25.706 "superblock": false, 00:31:25.706 "num_base_bdevs": 3, 00:31:25.706 "num_base_bdevs_discovered": 2, 00:31:25.706 "num_base_bdevs_operational": 3, 00:31:25.706 "base_bdevs_list": [ 00:31:25.706 { 00:31:25.706 "name": null, 00:31:25.706 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:25.706 "is_configured": false, 00:31:25.706 "data_offset": 0, 00:31:25.706 "data_size": 65536 00:31:25.706 }, 00:31:25.706 { 00:31:25.706 "name": "BaseBdev2", 00:31:25.706 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:25.706 "is_configured": true, 00:31:25.706 "data_offset": 0, 00:31:25.706 "data_size": 65536 00:31:25.706 }, 00:31:25.706 { 00:31:25.706 "name": "BaseBdev3", 00:31:25.706 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:25.706 "is_configured": true, 00:31:25.706 "data_offset": 0, 00:31:25.706 "data_size": 65536 00:31:25.706 } 00:31:25.706 ] 00:31:25.706 }' 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.706 13:42:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:26.274 13:42:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a069743-f591-499a-a3ab-0b3f8b7ff933 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 NewBaseBdev 00:31:26.274 [2024-10-28 13:42:40.376330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:26.274 [2024-10-28 13:42:40.376399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:26.274 [2024-10-28 13:42:40.376414] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:26.274 [2024-10-28 13:42:40.376725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:31:26.274 [2024-10-28 13:42:40.377285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:26.274 [2024-10-28 13:42:40.377311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:26.274 
[2024-10-28 13:42:40.377557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.274 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.274 [ 00:31:26.274 { 00:31:26.274 "name": "NewBaseBdev", 00:31:26.274 "aliases": [ 00:31:26.274 "2a069743-f591-499a-a3ab-0b3f8b7ff933" 00:31:26.274 ], 00:31:26.274 "product_name": "Malloc disk", 00:31:26.274 "block_size": 512, 00:31:26.274 "num_blocks": 65536, 00:31:26.274 "uuid": 
"2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:26.274 "assigned_rate_limits": { 00:31:26.274 "rw_ios_per_sec": 0, 00:31:26.274 "rw_mbytes_per_sec": 0, 00:31:26.274 "r_mbytes_per_sec": 0, 00:31:26.274 "w_mbytes_per_sec": 0 00:31:26.274 }, 00:31:26.274 "claimed": true, 00:31:26.274 "claim_type": "exclusive_write", 00:31:26.274 "zoned": false, 00:31:26.274 "supported_io_types": { 00:31:26.274 "read": true, 00:31:26.274 "write": true, 00:31:26.274 "unmap": true, 00:31:26.274 "flush": true, 00:31:26.274 "reset": true, 00:31:26.274 "nvme_admin": false, 00:31:26.274 "nvme_io": false, 00:31:26.274 "nvme_io_md": false, 00:31:26.274 "write_zeroes": true, 00:31:26.274 "zcopy": true, 00:31:26.275 "get_zone_info": false, 00:31:26.275 "zone_management": false, 00:31:26.275 "zone_append": false, 00:31:26.275 "compare": false, 00:31:26.275 "compare_and_write": false, 00:31:26.275 "abort": true, 00:31:26.275 "seek_hole": false, 00:31:26.275 "seek_data": false, 00:31:26.275 "copy": true, 00:31:26.275 "nvme_iov_md": false 00:31:26.275 }, 00:31:26.275 "memory_domains": [ 00:31:26.275 { 00:31:26.275 "dma_device_id": "system", 00:31:26.275 "dma_device_type": 1 00:31:26.275 }, 00:31:26.275 { 00:31:26.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:26.275 "dma_device_type": 2 00:31:26.275 } 00:31:26.275 ], 00:31:26.275 "driver_specific": {} 00:31:26.275 } 00:31:26.275 ] 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:26.275 13:42:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.275 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.533 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.533 "name": "Existed_Raid", 00:31:26.533 "uuid": "1241a96e-0daa-4247-a52a-bfb3655d0a6f", 00:31:26.533 "strip_size_kb": 64, 00:31:26.533 "state": "online", 00:31:26.533 "raid_level": "raid5f", 00:31:26.533 "superblock": false, 00:31:26.533 "num_base_bdevs": 3, 00:31:26.533 "num_base_bdevs_discovered": 3, 00:31:26.533 "num_base_bdevs_operational": 3, 00:31:26.533 "base_bdevs_list": [ 00:31:26.533 { 00:31:26.533 "name": "NewBaseBdev", 00:31:26.533 "uuid": "2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:26.533 "is_configured": true, 
00:31:26.533 "data_offset": 0, 00:31:26.533 "data_size": 65536 00:31:26.533 }, 00:31:26.533 { 00:31:26.533 "name": "BaseBdev2", 00:31:26.533 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:26.533 "is_configured": true, 00:31:26.533 "data_offset": 0, 00:31:26.533 "data_size": 65536 00:31:26.533 }, 00:31:26.533 { 00:31:26.533 "name": "BaseBdev3", 00:31:26.534 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:26.534 "is_configured": true, 00:31:26.534 "data_offset": 0, 00:31:26.534 "data_size": 65536 00:31:26.534 } 00:31:26.534 ] 00:31:26.534 }' 00:31:26.534 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.534 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.793 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:26.793 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:26.793 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:26.793 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:26.793 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:26.794 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:26.794 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:26.794 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:26.794 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.794 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.794 [2024-10-28 13:42:40.936798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:27.053 13:42:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:27.053 "name": "Existed_Raid", 00:31:27.053 "aliases": [ 00:31:27.053 "1241a96e-0daa-4247-a52a-bfb3655d0a6f" 00:31:27.053 ], 00:31:27.053 "product_name": "Raid Volume", 00:31:27.053 "block_size": 512, 00:31:27.053 "num_blocks": 131072, 00:31:27.053 "uuid": "1241a96e-0daa-4247-a52a-bfb3655d0a6f", 00:31:27.053 "assigned_rate_limits": { 00:31:27.053 "rw_ios_per_sec": 0, 00:31:27.053 "rw_mbytes_per_sec": 0, 00:31:27.053 "r_mbytes_per_sec": 0, 00:31:27.053 "w_mbytes_per_sec": 0 00:31:27.053 }, 00:31:27.053 "claimed": false, 00:31:27.053 "zoned": false, 00:31:27.053 "supported_io_types": { 00:31:27.053 "read": true, 00:31:27.053 "write": true, 00:31:27.053 "unmap": false, 00:31:27.053 "flush": false, 00:31:27.053 "reset": true, 00:31:27.053 "nvme_admin": false, 00:31:27.053 "nvme_io": false, 00:31:27.053 "nvme_io_md": false, 00:31:27.053 "write_zeroes": true, 00:31:27.053 "zcopy": false, 00:31:27.053 "get_zone_info": false, 00:31:27.053 "zone_management": false, 00:31:27.053 "zone_append": false, 00:31:27.053 "compare": false, 00:31:27.053 "compare_and_write": false, 00:31:27.053 "abort": false, 00:31:27.053 "seek_hole": false, 00:31:27.053 "seek_data": false, 00:31:27.053 "copy": false, 00:31:27.053 "nvme_iov_md": false 00:31:27.053 }, 00:31:27.053 "driver_specific": { 00:31:27.053 "raid": { 00:31:27.053 "uuid": "1241a96e-0daa-4247-a52a-bfb3655d0a6f", 00:31:27.053 "strip_size_kb": 64, 00:31:27.053 "state": "online", 00:31:27.053 "raid_level": "raid5f", 00:31:27.053 "superblock": false, 00:31:27.053 "num_base_bdevs": 3, 00:31:27.053 "num_base_bdevs_discovered": 3, 00:31:27.053 "num_base_bdevs_operational": 3, 00:31:27.053 "base_bdevs_list": [ 00:31:27.053 { 00:31:27.053 "name": "NewBaseBdev", 00:31:27.053 "uuid": 
"2a069743-f591-499a-a3ab-0b3f8b7ff933", 00:31:27.053 "is_configured": true, 00:31:27.053 "data_offset": 0, 00:31:27.053 "data_size": 65536 00:31:27.053 }, 00:31:27.053 { 00:31:27.053 "name": "BaseBdev2", 00:31:27.053 "uuid": "18d13057-86ad-4341-8337-996f4168fea1", 00:31:27.053 "is_configured": true, 00:31:27.053 "data_offset": 0, 00:31:27.053 "data_size": 65536 00:31:27.053 }, 00:31:27.053 { 00:31:27.053 "name": "BaseBdev3", 00:31:27.053 "uuid": "4dabe5bc-8c77-47b1-b37e-67e8b3bef99c", 00:31:27.053 "is_configured": true, 00:31:27.053 "data_offset": 0, 00:31:27.053 "data_size": 65536 00:31:27.053 } 00:31:27.053 ] 00:31:27.053 } 00:31:27.053 } 00:31:27.053 }' 00:31:27.053 13:42:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:27.053 BaseBdev2 00:31:27.053 BaseBdev3' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.053 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.312 [2024-10-28 13:42:41.252850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:27.312 [2024-10-28 13:42:41.253008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:27.312 [2024-10-28 13:42:41.253232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:27.312 [2024-10-28 13:42:41.253708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:27.312 [2024-10-28 13:42:41.253741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92657 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 92657 ']' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 92657 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92657 00:31:27.312 killing 
process with pid 92657 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92657' 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 92657 00:31:27.312 [2024-10-28 13:42:41.294006] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:27.312 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 92657 00:31:27.312 [2024-10-28 13:42:41.329099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:27.571 13:42:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:27.571 00:31:27.571 real 0m10.253s 00:31:27.571 user 0m18.017s 00:31:27.571 sys 0m1.645s 00:31:27.571 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:27.571 13:42:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.571 ************************************ 00:31:27.571 END TEST raid5f_state_function_test 00:31:27.571 ************************************ 00:31:27.572 13:42:41 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:31:27.572 13:42:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:27.572 13:42:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:27.572 13:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:27.572 ************************************ 00:31:27.572 START TEST raid5f_state_function_test_sb 00:31:27.572 ************************************ 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:27.572 
13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93273 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93273' 00:31:27.572 Process raid pid: 93273 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93273 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93273 ']' 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:27.572 13:42:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:27.572 13:42:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.831 [2024-10-28 13:42:41.749792] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:31:27.831 [2024-10-28 13:42:41.750987] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.831 [2024-10-28 13:42:41.911733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:27.831 [2024-10-28 13:42:41.940521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.089 [2024-10-28 13:42:41.995019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.089 [2024-10-28 13:42:42.052998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:28.089 [2024-10-28 13:42:42.053281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.657 [2024-10-28 13:42:42.759982] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:28.657 [2024-10-28 13:42:42.760222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:28.657 [2024-10-28 13:42:42.760406] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:28.657 [2024-10-28 13:42:42.760465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:28.657 [2024-10-28 13:42:42.760603] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:28.657 [2024-10-28 13:42:42.760631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.657 13:42:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.657 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.916 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.916 "name": "Existed_Raid", 00:31:28.916 "uuid": "ec384b3a-cefc-4f8b-9ec2-56063f46d2fe", 
00:31:28.916 "strip_size_kb": 64, 00:31:28.916 "state": "configuring", 00:31:28.916 "raid_level": "raid5f", 00:31:28.916 "superblock": true, 00:31:28.916 "num_base_bdevs": 3, 00:31:28.916 "num_base_bdevs_discovered": 0, 00:31:28.916 "num_base_bdevs_operational": 3, 00:31:28.916 "base_bdevs_list": [ 00:31:28.916 { 00:31:28.916 "name": "BaseBdev1", 00:31:28.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.916 "is_configured": false, 00:31:28.916 "data_offset": 0, 00:31:28.916 "data_size": 0 00:31:28.916 }, 00:31:28.916 { 00:31:28.916 "name": "BaseBdev2", 00:31:28.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.916 "is_configured": false, 00:31:28.916 "data_offset": 0, 00:31:28.916 "data_size": 0 00:31:28.916 }, 00:31:28.916 { 00:31:28.916 "name": "BaseBdev3", 00:31:28.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.916 "is_configured": false, 00:31:28.916 "data_offset": 0, 00:31:28.916 "data_size": 0 00:31:28.916 } 00:31:28.916 ] 00:31:28.916 }' 00:31:28.916 13:42:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.917 13:42:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 [2024-10-28 13:42:43.260018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:29.175 [2024-10-28 13:42:43.260203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.175 13:42:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 [2024-10-28 13:42:43.272009] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:29.175 [2024-10-28 13:42:43.272214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:29.175 [2024-10-28 13:42:43.272355] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:29.175 [2024-10-28 13:42:43.272412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:29.175 [2024-10-28 13:42:43.272548] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:29.175 [2024-10-28 13:42:43.272612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 [2024-10-28 13:42:43.292437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:29.175 BaseBdev1 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.175 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 [ 00:31:29.176 { 00:31:29.176 "name": "BaseBdev1", 00:31:29.176 "aliases": [ 00:31:29.176 "66823c65-af1d-470f-aa7f-a8c4ae03dd9d" 00:31:29.176 ], 00:31:29.176 "product_name": "Malloc disk", 00:31:29.176 "block_size": 512, 00:31:29.176 "num_blocks": 65536, 00:31:29.176 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:29.176 "assigned_rate_limits": { 00:31:29.176 "rw_ios_per_sec": 0, 00:31:29.176 "rw_mbytes_per_sec": 0, 00:31:29.176 "r_mbytes_per_sec": 0, 00:31:29.176 "w_mbytes_per_sec": 0 00:31:29.176 }, 
00:31:29.176 "claimed": true, 00:31:29.176 "claim_type": "exclusive_write", 00:31:29.176 "zoned": false, 00:31:29.176 "supported_io_types": { 00:31:29.176 "read": true, 00:31:29.176 "write": true, 00:31:29.176 "unmap": true, 00:31:29.176 "flush": true, 00:31:29.176 "reset": true, 00:31:29.176 "nvme_admin": false, 00:31:29.176 "nvme_io": false, 00:31:29.176 "nvme_io_md": false, 00:31:29.176 "write_zeroes": true, 00:31:29.176 "zcopy": true, 00:31:29.176 "get_zone_info": false, 00:31:29.176 "zone_management": false, 00:31:29.176 "zone_append": false, 00:31:29.176 "compare": false, 00:31:29.176 "compare_and_write": false, 00:31:29.176 "abort": true, 00:31:29.176 "seek_hole": false, 00:31:29.176 "seek_data": false, 00:31:29.176 "copy": true, 00:31:29.176 "nvme_iov_md": false 00:31:29.176 }, 00:31:29.176 "memory_domains": [ 00:31:29.176 { 00:31:29.176 "dma_device_id": "system", 00:31:29.176 "dma_device_type": 1 00:31:29.176 }, 00:31:29.176 { 00:31:29.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.176 "dma_device_type": 2 00:31:29.176 } 00:31:29.176 ], 00:31:29.176 "driver_specific": {} 00:31:29.176 } 00:31:29.176 ] 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.176 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.434 "name": "Existed_Raid", 00:31:29.434 "uuid": "664eb1c2-a8bf-44c5-a20c-f2b6f53e836e", 00:31:29.434 "strip_size_kb": 64, 00:31:29.434 "state": "configuring", 00:31:29.434 "raid_level": "raid5f", 00:31:29.434 "superblock": true, 00:31:29.434 "num_base_bdevs": 3, 00:31:29.434 "num_base_bdevs_discovered": 1, 00:31:29.434 "num_base_bdevs_operational": 3, 00:31:29.434 "base_bdevs_list": [ 00:31:29.434 { 00:31:29.434 "name": "BaseBdev1", 00:31:29.434 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:29.434 "is_configured": true, 00:31:29.434 "data_offset": 2048, 00:31:29.434 "data_size": 63488 00:31:29.434 }, 00:31:29.434 { 00:31:29.434 "name": "BaseBdev2", 00:31:29.434 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:29.434 "is_configured": false, 00:31:29.434 "data_offset": 0, 00:31:29.434 "data_size": 0 00:31:29.434 }, 00:31:29.434 { 00:31:29.434 "name": "BaseBdev3", 00:31:29.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.434 "is_configured": false, 00:31:29.434 "data_offset": 0, 00:31:29.434 "data_size": 0 00:31:29.434 } 00:31:29.434 ] 00:31:29.434 }' 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.434 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.693 [2024-10-28 13:42:43.840668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:29.693 [2024-10-28 13:42:43.840740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.693 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.693 [2024-10-28 13:42:43.848688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:29.952 [2024-10-28 13:42:43.851599] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:31:29.952 [2024-10-28 13:42:43.851765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:29.952 [2024-10-28 13:42:43.851891] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:29.952 [2024-10-28 13:42:43.852011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.952 13:42:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.952 "name": "Existed_Raid", 00:31:29.952 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:29.952 "strip_size_kb": 64, 00:31:29.952 "state": "configuring", 00:31:29.952 "raid_level": "raid5f", 00:31:29.952 "superblock": true, 00:31:29.952 "num_base_bdevs": 3, 00:31:29.952 "num_base_bdevs_discovered": 1, 00:31:29.952 "num_base_bdevs_operational": 3, 00:31:29.952 "base_bdevs_list": [ 00:31:29.952 { 00:31:29.952 "name": "BaseBdev1", 00:31:29.952 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:29.952 "is_configured": true, 00:31:29.952 "data_offset": 2048, 00:31:29.952 "data_size": 63488 00:31:29.952 }, 00:31:29.952 { 00:31:29.952 "name": "BaseBdev2", 00:31:29.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.952 "is_configured": false, 00:31:29.952 "data_offset": 0, 00:31:29.952 "data_size": 0 00:31:29.952 }, 00:31:29.952 { 00:31:29.952 "name": "BaseBdev3", 00:31:29.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.952 "is_configured": false, 00:31:29.952 "data_offset": 0, 00:31:29.952 "data_size": 0 00:31:29.952 } 00:31:29.952 ] 00:31:29.952 }' 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.952 13:42:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:30.211 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:30.211 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.211 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.470 BaseBdev2 00:31:30.470 [2024-10-28 13:42:44.375541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.470 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.470 [ 00:31:30.470 { 00:31:30.470 "name": "BaseBdev2", 00:31:30.470 "aliases": [ 00:31:30.470 "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb" 00:31:30.470 ], 00:31:30.470 "product_name": "Malloc disk", 00:31:30.470 "block_size": 512, 00:31:30.470 "num_blocks": 65536, 00:31:30.470 "uuid": "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb", 00:31:30.470 "assigned_rate_limits": { 00:31:30.470 "rw_ios_per_sec": 0, 00:31:30.470 "rw_mbytes_per_sec": 0, 00:31:30.470 "r_mbytes_per_sec": 0, 00:31:30.470 "w_mbytes_per_sec": 0 00:31:30.470 }, 00:31:30.471 "claimed": true, 00:31:30.471 "claim_type": "exclusive_write", 00:31:30.471 "zoned": false, 00:31:30.471 "supported_io_types": { 00:31:30.471 "read": true, 00:31:30.471 "write": true, 00:31:30.471 "unmap": true, 00:31:30.471 "flush": true, 00:31:30.471 "reset": true, 00:31:30.471 "nvme_admin": false, 00:31:30.471 "nvme_io": false, 00:31:30.471 "nvme_io_md": false, 00:31:30.471 "write_zeroes": true, 00:31:30.471 "zcopy": true, 00:31:30.471 "get_zone_info": false, 00:31:30.471 "zone_management": false, 00:31:30.471 "zone_append": false, 00:31:30.471 "compare": false, 00:31:30.471 "compare_and_write": false, 00:31:30.471 "abort": true, 00:31:30.471 "seek_hole": false, 00:31:30.471 "seek_data": false, 00:31:30.471 "copy": true, 00:31:30.471 "nvme_iov_md": false 00:31:30.471 }, 00:31:30.471 "memory_domains": [ 00:31:30.471 { 00:31:30.471 "dma_device_id": "system", 00:31:30.471 "dma_device_type": 1 00:31:30.471 }, 00:31:30.471 { 00:31:30.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:30.471 "dma_device_type": 2 00:31:30.471 } 00:31:30.471 ], 00:31:30.471 "driver_specific": {} 00:31:30.471 } 00:31:30.471 ] 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:30.471 13:42:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:30.471 "name": "Existed_Raid", 00:31:30.471 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:30.471 "strip_size_kb": 64, 00:31:30.471 "state": "configuring", 00:31:30.471 "raid_level": "raid5f", 00:31:30.471 "superblock": true, 00:31:30.471 "num_base_bdevs": 3, 00:31:30.471 "num_base_bdevs_discovered": 2, 00:31:30.471 "num_base_bdevs_operational": 3, 00:31:30.471 "base_bdevs_list": [ 00:31:30.471 { 00:31:30.471 "name": "BaseBdev1", 00:31:30.471 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:30.471 "is_configured": true, 00:31:30.471 "data_offset": 2048, 00:31:30.471 "data_size": 63488 00:31:30.471 }, 00:31:30.471 { 00:31:30.471 "name": "BaseBdev2", 00:31:30.471 "uuid": "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb", 00:31:30.471 "is_configured": true, 00:31:30.471 "data_offset": 2048, 00:31:30.471 "data_size": 63488 00:31:30.471 }, 00:31:30.471 { 00:31:30.471 "name": "BaseBdev3", 00:31:30.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.471 "is_configured": false, 00:31:30.471 "data_offset": 0, 00:31:30.471 "data_size": 0 00:31:30.471 } 00:31:30.471 ] 00:31:30.471 }' 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:30.471 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.037 [2024-10-28 13:42:44.963167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:31:31.037 BaseBdev3 00:31:31.037 [2024-10-28 13:42:44.963699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:31.037 [2024-10-28 13:42:44.963726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:31.037 [2024-10-28 13:42:44.964079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:31.037 [2024-10-28 13:42:44.964655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.037 [2024-10-28 13:42:44.964681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:31:31.037 [2024-10-28 13:42:44.964835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.037 13:42:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.037 [ 00:31:31.037 { 00:31:31.037 "name": "BaseBdev3", 00:31:31.037 "aliases": [ 00:31:31.037 "a456a78c-0c02-4a67-9e1c-054f35e58776" 00:31:31.037 ], 00:31:31.037 "product_name": "Malloc disk", 00:31:31.037 "block_size": 512, 00:31:31.037 "num_blocks": 65536, 00:31:31.037 "uuid": "a456a78c-0c02-4a67-9e1c-054f35e58776", 00:31:31.037 "assigned_rate_limits": { 00:31:31.037 "rw_ios_per_sec": 0, 00:31:31.037 "rw_mbytes_per_sec": 0, 00:31:31.037 "r_mbytes_per_sec": 0, 00:31:31.037 "w_mbytes_per_sec": 0 00:31:31.037 }, 00:31:31.037 "claimed": true, 00:31:31.037 "claim_type": "exclusive_write", 00:31:31.037 "zoned": false, 00:31:31.037 "supported_io_types": { 00:31:31.037 "read": true, 00:31:31.037 "write": true, 00:31:31.037 "unmap": true, 00:31:31.037 "flush": true, 00:31:31.037 "reset": true, 00:31:31.037 "nvme_admin": false, 00:31:31.037 "nvme_io": false, 00:31:31.037 "nvme_io_md": false, 00:31:31.037 "write_zeroes": true, 00:31:31.037 "zcopy": true, 00:31:31.037 "get_zone_info": false, 00:31:31.037 "zone_management": false, 00:31:31.037 "zone_append": false, 00:31:31.037 "compare": false, 00:31:31.037 "compare_and_write": false, 00:31:31.037 "abort": true, 00:31:31.037 "seek_hole": false, 00:31:31.037 "seek_data": false, 00:31:31.037 "copy": true, 00:31:31.037 "nvme_iov_md": false 00:31:31.037 }, 00:31:31.037 "memory_domains": [ 00:31:31.037 { 00:31:31.037 "dma_device_id": "system", 00:31:31.037 "dma_device_type": 1 00:31:31.037 }, 00:31:31.037 { 00:31:31.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:31.037 
"dma_device_type": 2 00:31:31.037 } 00:31:31.037 ], 00:31:31.037 "driver_specific": {} 00:31:31.037 } 00:31:31.037 ] 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:31.037 13:42:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:31.037 "name": "Existed_Raid", 00:31:31.037 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:31.037 "strip_size_kb": 64, 00:31:31.037 "state": "online", 00:31:31.037 "raid_level": "raid5f", 00:31:31.037 "superblock": true, 00:31:31.037 "num_base_bdevs": 3, 00:31:31.037 "num_base_bdevs_discovered": 3, 00:31:31.037 "num_base_bdevs_operational": 3, 00:31:31.037 "base_bdevs_list": [ 00:31:31.037 { 00:31:31.037 "name": "BaseBdev1", 00:31:31.037 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:31.037 "is_configured": true, 00:31:31.037 "data_offset": 2048, 00:31:31.037 "data_size": 63488 00:31:31.037 }, 00:31:31.037 { 00:31:31.037 "name": "BaseBdev2", 00:31:31.037 "uuid": "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb", 00:31:31.037 "is_configured": true, 00:31:31.037 "data_offset": 2048, 00:31:31.037 "data_size": 63488 00:31:31.037 }, 00:31:31.037 { 00:31:31.037 "name": "BaseBdev3", 00:31:31.037 "uuid": "a456a78c-0c02-4a67-9e1c-054f35e58776", 00:31:31.037 "is_configured": true, 00:31:31.037 "data_offset": 2048, 00:31:31.037 "data_size": 63488 00:31:31.037 } 00:31:31.037 ] 00:31:31.037 }' 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:31.037 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.604 [2024-10-28 13:42:45.519693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:31.604 "name": "Existed_Raid", 00:31:31.604 "aliases": [ 00:31:31.604 "e03468dc-bb56-42d2-be98-01433c8fecf6" 00:31:31.604 ], 00:31:31.604 "product_name": "Raid Volume", 00:31:31.604 "block_size": 512, 00:31:31.604 "num_blocks": 126976, 00:31:31.604 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:31.604 "assigned_rate_limits": { 00:31:31.604 "rw_ios_per_sec": 0, 00:31:31.604 "rw_mbytes_per_sec": 0, 00:31:31.604 "r_mbytes_per_sec": 0, 00:31:31.604 "w_mbytes_per_sec": 0 00:31:31.604 }, 00:31:31.604 "claimed": false, 00:31:31.604 "zoned": false, 00:31:31.604 "supported_io_types": { 00:31:31.604 "read": true, 00:31:31.604 "write": true, 00:31:31.604 "unmap": false, 
00:31:31.604 "flush": false, 00:31:31.604 "reset": true, 00:31:31.604 "nvme_admin": false, 00:31:31.604 "nvme_io": false, 00:31:31.604 "nvme_io_md": false, 00:31:31.604 "write_zeroes": true, 00:31:31.604 "zcopy": false, 00:31:31.604 "get_zone_info": false, 00:31:31.604 "zone_management": false, 00:31:31.604 "zone_append": false, 00:31:31.604 "compare": false, 00:31:31.604 "compare_and_write": false, 00:31:31.604 "abort": false, 00:31:31.604 "seek_hole": false, 00:31:31.604 "seek_data": false, 00:31:31.604 "copy": false, 00:31:31.604 "nvme_iov_md": false 00:31:31.604 }, 00:31:31.604 "driver_specific": { 00:31:31.604 "raid": { 00:31:31.604 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:31.604 "strip_size_kb": 64, 00:31:31.604 "state": "online", 00:31:31.604 "raid_level": "raid5f", 00:31:31.604 "superblock": true, 00:31:31.604 "num_base_bdevs": 3, 00:31:31.604 "num_base_bdevs_discovered": 3, 00:31:31.604 "num_base_bdevs_operational": 3, 00:31:31.604 "base_bdevs_list": [ 00:31:31.604 { 00:31:31.604 "name": "BaseBdev1", 00:31:31.604 "uuid": "66823c65-af1d-470f-aa7f-a8c4ae03dd9d", 00:31:31.604 "is_configured": true, 00:31:31.604 "data_offset": 2048, 00:31:31.604 "data_size": 63488 00:31:31.604 }, 00:31:31.604 { 00:31:31.604 "name": "BaseBdev2", 00:31:31.604 "uuid": "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb", 00:31:31.604 "is_configured": true, 00:31:31.604 "data_offset": 2048, 00:31:31.604 "data_size": 63488 00:31:31.604 }, 00:31:31.604 { 00:31:31.604 "name": "BaseBdev3", 00:31:31.604 "uuid": "a456a78c-0c02-4a67-9e1c-054f35e58776", 00:31:31.604 "is_configured": true, 00:31:31.604 "data_offset": 2048, 00:31:31.604 "data_size": 63488 00:31:31.604 } 00:31:31.604 ] 00:31:31.604 } 00:31:31.604 } 00:31:31.604 }' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:31:31.604 BaseBdev2 00:31:31.604 BaseBdev3' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.604 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.604 13:42:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.910 [2024-10-28 13:42:45.831545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:31.910 
13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:31.910 "name": "Existed_Raid", 00:31:31.910 "uuid": "e03468dc-bb56-42d2-be98-01433c8fecf6", 00:31:31.910 "strip_size_kb": 64, 00:31:31.910 "state": "online", 00:31:31.910 "raid_level": "raid5f", 00:31:31.910 "superblock": true, 00:31:31.910 "num_base_bdevs": 3, 00:31:31.910 "num_base_bdevs_discovered": 2, 00:31:31.910 "num_base_bdevs_operational": 2, 00:31:31.910 "base_bdevs_list": [ 00:31:31.910 { 00:31:31.910 "name": null, 00:31:31.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.910 "is_configured": false, 00:31:31.910 "data_offset": 0, 00:31:31.910 "data_size": 63488 00:31:31.910 }, 00:31:31.910 { 00:31:31.910 "name": "BaseBdev2", 00:31:31.910 "uuid": "5cd2c4dd-91ff-4126-a269-ec9f95fd6ebb", 00:31:31.910 "is_configured": true, 00:31:31.910 "data_offset": 2048, 00:31:31.910 "data_size": 63488 00:31:31.910 }, 00:31:31.910 { 00:31:31.910 "name": "BaseBdev3", 00:31:31.910 "uuid": "a456a78c-0c02-4a67-9e1c-054f35e58776", 00:31:31.910 "is_configured": true, 00:31:31.910 "data_offset": 2048, 00:31:31.910 "data_size": 63488 00:31:31.910 } 00:31:31.910 ] 00:31:31.910 }' 00:31:31.910 13:42:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:31.911 13:42:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.478 
13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 [2024-10-28 13:42:46.423425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:32.478 [2024-10-28 13:42:46.423749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:32.478 [2024-10-28 13:42:46.434922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 [2024-10-28 13:42:46.490994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:32.478 [2024-10-28 13:42:46.491240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:32.478 13:42:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 BaseBdev2 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:32.478 13:42:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 [ 00:31:32.478 { 00:31:32.478 "name": "BaseBdev2", 00:31:32.478 "aliases": [ 00:31:32.478 "f1644640-c8cf-49d9-9560-18188fd5b5ed" 00:31:32.478 ], 00:31:32.478 "product_name": "Malloc disk", 00:31:32.478 "block_size": 512, 00:31:32.478 "num_blocks": 65536, 00:31:32.478 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:32.478 "assigned_rate_limits": { 00:31:32.478 "rw_ios_per_sec": 0, 00:31:32.478 "rw_mbytes_per_sec": 0, 00:31:32.478 "r_mbytes_per_sec": 0, 00:31:32.478 "w_mbytes_per_sec": 0 00:31:32.478 }, 00:31:32.478 "claimed": false, 00:31:32.478 "zoned": false, 00:31:32.478 "supported_io_types": { 00:31:32.478 "read": true, 00:31:32.478 "write": true, 00:31:32.478 "unmap": true, 00:31:32.478 "flush": true, 00:31:32.478 "reset": true, 00:31:32.478 "nvme_admin": false, 00:31:32.478 "nvme_io": false, 00:31:32.478 "nvme_io_md": false, 00:31:32.478 "write_zeroes": true, 00:31:32.478 "zcopy": true, 00:31:32.478 "get_zone_info": false, 00:31:32.478 "zone_management": false, 00:31:32.478 "zone_append": false, 00:31:32.478 "compare": false, 00:31:32.478 "compare_and_write": false, 00:31:32.478 "abort": true, 00:31:32.478 "seek_hole": false, 00:31:32.478 "seek_data": false, 00:31:32.478 "copy": true, 00:31:32.478 "nvme_iov_md": false 00:31:32.478 }, 00:31:32.478 "memory_domains": [ 
00:31:32.478 { 00:31:32.478 "dma_device_id": "system", 00:31:32.478 "dma_device_type": 1 00:31:32.478 }, 00:31:32.478 { 00:31:32.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.478 "dma_device_type": 2 00:31:32.478 } 00:31:32.478 ], 00:31:32.478 "driver_specific": {} 00:31:32.478 } 00:31:32.478 ] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 BaseBdev3 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:32.478 13:42:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.478 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.735 [ 00:31:32.735 { 00:31:32.735 "name": "BaseBdev3", 00:31:32.735 "aliases": [ 00:31:32.735 "fc7e13b8-aae6-481c-b718-90a681174cdf" 00:31:32.735 ], 00:31:32.735 "product_name": "Malloc disk", 00:31:32.735 "block_size": 512, 00:31:32.735 "num_blocks": 65536, 00:31:32.735 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:32.735 "assigned_rate_limits": { 00:31:32.735 "rw_ios_per_sec": 0, 00:31:32.735 "rw_mbytes_per_sec": 0, 00:31:32.735 "r_mbytes_per_sec": 0, 00:31:32.735 "w_mbytes_per_sec": 0 00:31:32.735 }, 00:31:32.735 "claimed": false, 00:31:32.735 "zoned": false, 00:31:32.735 "supported_io_types": { 00:31:32.735 "read": true, 00:31:32.735 "write": true, 00:31:32.735 "unmap": true, 00:31:32.735 "flush": true, 00:31:32.735 "reset": true, 00:31:32.735 "nvme_admin": false, 00:31:32.735 "nvme_io": false, 00:31:32.735 "nvme_io_md": false, 00:31:32.735 "write_zeroes": true, 00:31:32.735 "zcopy": true, 00:31:32.735 "get_zone_info": false, 00:31:32.735 "zone_management": false, 00:31:32.735 "zone_append": false, 00:31:32.735 "compare": false, 00:31:32.735 "compare_and_write": false, 00:31:32.735 "abort": true, 00:31:32.735 "seek_hole": false, 00:31:32.735 
"seek_data": false, 00:31:32.735 "copy": true, 00:31:32.735 "nvme_iov_md": false 00:31:32.735 }, 00:31:32.735 "memory_domains": [ 00:31:32.735 { 00:31:32.735 "dma_device_id": "system", 00:31:32.735 "dma_device_type": 1 00:31:32.735 }, 00:31:32.735 { 00:31:32.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.735 "dma_device_type": 2 00:31:32.735 } 00:31:32.735 ], 00:31:32.735 "driver_specific": {} 00:31:32.735 } 00:31:32.735 ] 00:31:32.735 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.735 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.736 [2024-10-28 13:42:46.655941] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:32.736 [2024-10-28 13:42:46.656017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:32.736 [2024-10-28 13:42:46.656043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:32.736 [2024-10-28 13:42:46.658637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:32.736 "name": "Existed_Raid", 00:31:32.736 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:32.736 "strip_size_kb": 64, 00:31:32.736 
"state": "configuring", 00:31:32.736 "raid_level": "raid5f", 00:31:32.736 "superblock": true, 00:31:32.736 "num_base_bdevs": 3, 00:31:32.736 "num_base_bdevs_discovered": 2, 00:31:32.736 "num_base_bdevs_operational": 3, 00:31:32.736 "base_bdevs_list": [ 00:31:32.736 { 00:31:32.736 "name": "BaseBdev1", 00:31:32.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.736 "is_configured": false, 00:31:32.736 "data_offset": 0, 00:31:32.736 "data_size": 0 00:31:32.736 }, 00:31:32.736 { 00:31:32.736 "name": "BaseBdev2", 00:31:32.736 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:32.736 "is_configured": true, 00:31:32.736 "data_offset": 2048, 00:31:32.736 "data_size": 63488 00:31:32.736 }, 00:31:32.736 { 00:31:32.736 "name": "BaseBdev3", 00:31:32.736 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:32.736 "is_configured": true, 00:31:32.736 "data_offset": 2048, 00:31:32.736 "data_size": 63488 00:31:32.736 } 00:31:32.736 ] 00:31:32.736 }' 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:32.736 13:42:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.315 [2024-10-28 13:42:47.164079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:33.315 "name": "Existed_Raid", 00:31:33.315 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:33.315 "strip_size_kb": 64, 00:31:33.315 "state": "configuring", 00:31:33.315 "raid_level": "raid5f", 00:31:33.315 "superblock": true, 00:31:33.315 "num_base_bdevs": 3, 00:31:33.315 "num_base_bdevs_discovered": 1, 
00:31:33.315 "num_base_bdevs_operational": 3, 00:31:33.315 "base_bdevs_list": [ 00:31:33.315 { 00:31:33.315 "name": "BaseBdev1", 00:31:33.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.315 "is_configured": false, 00:31:33.315 "data_offset": 0, 00:31:33.315 "data_size": 0 00:31:33.315 }, 00:31:33.315 { 00:31:33.315 "name": null, 00:31:33.315 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:33.315 "is_configured": false, 00:31:33.315 "data_offset": 0, 00:31:33.315 "data_size": 63488 00:31:33.315 }, 00:31:33.315 { 00:31:33.315 "name": "BaseBdev3", 00:31:33.315 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:33.315 "is_configured": true, 00:31:33.315 "data_offset": 2048, 00:31:33.315 "data_size": 63488 00:31:33.315 } 00:31:33.315 ] 00:31:33.315 }' 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:33.315 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:33.588 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.588 13:42:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.845 [2024-10-28 13:42:47.754860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:33.846 BaseBdev1 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.846 [ 00:31:33.846 { 00:31:33.846 "name": "BaseBdev1", 00:31:33.846 "aliases": [ 00:31:33.846 
"92639ee4-208a-4b16-9eca-c44fb4fa17f4" 00:31:33.846 ], 00:31:33.846 "product_name": "Malloc disk", 00:31:33.846 "block_size": 512, 00:31:33.846 "num_blocks": 65536, 00:31:33.846 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:33.846 "assigned_rate_limits": { 00:31:33.846 "rw_ios_per_sec": 0, 00:31:33.846 "rw_mbytes_per_sec": 0, 00:31:33.846 "r_mbytes_per_sec": 0, 00:31:33.846 "w_mbytes_per_sec": 0 00:31:33.846 }, 00:31:33.846 "claimed": true, 00:31:33.846 "claim_type": "exclusive_write", 00:31:33.846 "zoned": false, 00:31:33.846 "supported_io_types": { 00:31:33.846 "read": true, 00:31:33.846 "write": true, 00:31:33.846 "unmap": true, 00:31:33.846 "flush": true, 00:31:33.846 "reset": true, 00:31:33.846 "nvme_admin": false, 00:31:33.846 "nvme_io": false, 00:31:33.846 "nvme_io_md": false, 00:31:33.846 "write_zeroes": true, 00:31:33.846 "zcopy": true, 00:31:33.846 "get_zone_info": false, 00:31:33.846 "zone_management": false, 00:31:33.846 "zone_append": false, 00:31:33.846 "compare": false, 00:31:33.846 "compare_and_write": false, 00:31:33.846 "abort": true, 00:31:33.846 "seek_hole": false, 00:31:33.846 "seek_data": false, 00:31:33.846 "copy": true, 00:31:33.846 "nvme_iov_md": false 00:31:33.846 }, 00:31:33.846 "memory_domains": [ 00:31:33.846 { 00:31:33.846 "dma_device_id": "system", 00:31:33.846 "dma_device_type": 1 00:31:33.846 }, 00:31:33.846 { 00:31:33.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:33.846 "dma_device_type": 2 00:31:33.846 } 00:31:33.846 ], 00:31:33.846 "driver_specific": {} 00:31:33.846 } 00:31:33.846 ] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:33.846 "name": "Existed_Raid", 00:31:33.846 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:33.846 "strip_size_kb": 64, 00:31:33.846 "state": "configuring", 00:31:33.846 "raid_level": "raid5f", 00:31:33.846 "superblock": true, 00:31:33.846 "num_base_bdevs": 3, 00:31:33.846 
"num_base_bdevs_discovered": 2, 00:31:33.846 "num_base_bdevs_operational": 3, 00:31:33.846 "base_bdevs_list": [ 00:31:33.846 { 00:31:33.846 "name": "BaseBdev1", 00:31:33.846 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:33.846 "is_configured": true, 00:31:33.846 "data_offset": 2048, 00:31:33.846 "data_size": 63488 00:31:33.846 }, 00:31:33.846 { 00:31:33.846 "name": null, 00:31:33.846 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:33.846 "is_configured": false, 00:31:33.846 "data_offset": 0, 00:31:33.846 "data_size": 63488 00:31:33.846 }, 00:31:33.846 { 00:31:33.846 "name": "BaseBdev3", 00:31:33.846 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:33.846 "is_configured": true, 00:31:33.846 "data_offset": 2048, 00:31:33.846 "data_size": 63488 00:31:33.846 } 00:31:33.846 ] 00:31:33.846 }' 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:33.846 13:42:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.414 [2024-10-28 13:42:48.351198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.414 "name": "Existed_Raid", 00:31:34.414 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:34.414 "strip_size_kb": 64, 00:31:34.414 "state": "configuring", 00:31:34.414 "raid_level": "raid5f", 00:31:34.414 "superblock": true, 00:31:34.414 "num_base_bdevs": 3, 00:31:34.414 "num_base_bdevs_discovered": 1, 00:31:34.414 "num_base_bdevs_operational": 3, 00:31:34.414 "base_bdevs_list": [ 00:31:34.414 { 00:31:34.414 "name": "BaseBdev1", 00:31:34.414 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:34.414 "is_configured": true, 00:31:34.414 "data_offset": 2048, 00:31:34.414 "data_size": 63488 00:31:34.414 }, 00:31:34.414 { 00:31:34.414 "name": null, 00:31:34.414 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:34.414 "is_configured": false, 00:31:34.414 "data_offset": 0, 00:31:34.414 "data_size": 63488 00:31:34.414 }, 00:31:34.414 { 00:31:34.414 "name": null, 00:31:34.414 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:34.414 "is_configured": false, 00:31:34.414 "data_offset": 0, 00:31:34.414 "data_size": 63488 00:31:34.414 } 00:31:34.414 ] 00:31:34.414 }' 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.414 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.981 [2024-10-28 13:42:48.919414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.981 
13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.981 "name": "Existed_Raid", 00:31:34.981 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:34.981 "strip_size_kb": 64, 00:31:34.981 "state": "configuring", 00:31:34.981 "raid_level": "raid5f", 00:31:34.981 "superblock": true, 00:31:34.981 "num_base_bdevs": 3, 00:31:34.981 "num_base_bdevs_discovered": 2, 00:31:34.981 "num_base_bdevs_operational": 3, 00:31:34.981 "base_bdevs_list": [ 00:31:34.981 { 00:31:34.981 "name": "BaseBdev1", 00:31:34.981 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:34.981 "is_configured": true, 00:31:34.981 "data_offset": 2048, 00:31:34.981 "data_size": 63488 00:31:34.981 }, 00:31:34.981 { 00:31:34.981 "name": null, 00:31:34.981 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:34.981 "is_configured": false, 00:31:34.981 "data_offset": 0, 00:31:34.981 "data_size": 63488 00:31:34.981 }, 00:31:34.981 { 00:31:34.981 "name": "BaseBdev3", 00:31:34.981 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:34.981 "is_configured": true, 00:31:34.981 "data_offset": 2048, 00:31:34.981 "data_size": 63488 00:31:34.981 } 00:31:34.981 ] 00:31:34.981 }' 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:31:34.981 13:42:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.550 [2024-10-28 13:42:49.515585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:35.550 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.551 "name": "Existed_Raid", 00:31:35.551 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 00:31:35.551 "strip_size_kb": 64, 00:31:35.551 "state": "configuring", 00:31:35.551 "raid_level": "raid5f", 00:31:35.551 "superblock": true, 00:31:35.551 "num_base_bdevs": 3, 00:31:35.551 "num_base_bdevs_discovered": 1, 00:31:35.551 "num_base_bdevs_operational": 3, 00:31:35.551 "base_bdevs_list": [ 00:31:35.551 { 00:31:35.551 "name": null, 00:31:35.551 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:35.551 "is_configured": false, 00:31:35.551 "data_offset": 0, 00:31:35.551 "data_size": 63488 00:31:35.551 }, 00:31:35.551 { 00:31:35.551 "name": null, 
00:31:35.551 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:35.551 "is_configured": false, 00:31:35.551 "data_offset": 0, 00:31:35.551 "data_size": 63488 00:31:35.551 }, 00:31:35.551 { 00:31:35.551 "name": "BaseBdev3", 00:31:35.551 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:35.551 "is_configured": true, 00:31:35.551 "data_offset": 2048, 00:31:35.551 "data_size": 63488 00:31:35.551 } 00:31:35.551 ] 00:31:35.551 }' 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.551 13:42:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.117 [2024-10-28 13:42:50.101907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.117 13:42:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:36.117 "name": "Existed_Raid", 00:31:36.117 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e", 
00:31:36.117 "strip_size_kb": 64, 00:31:36.117 "state": "configuring", 00:31:36.117 "raid_level": "raid5f", 00:31:36.117 "superblock": true, 00:31:36.117 "num_base_bdevs": 3, 00:31:36.117 "num_base_bdevs_discovered": 2, 00:31:36.117 "num_base_bdevs_operational": 3, 00:31:36.117 "base_bdevs_list": [ 00:31:36.117 { 00:31:36.117 "name": null, 00:31:36.117 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4", 00:31:36.117 "is_configured": false, 00:31:36.117 "data_offset": 0, 00:31:36.117 "data_size": 63488 00:31:36.117 }, 00:31:36.117 { 00:31:36.117 "name": "BaseBdev2", 00:31:36.117 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed", 00:31:36.117 "is_configured": true, 00:31:36.117 "data_offset": 2048, 00:31:36.117 "data_size": 63488 00:31:36.117 }, 00:31:36.117 { 00:31:36.117 "name": "BaseBdev3", 00:31:36.117 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf", 00:31:36.117 "is_configured": true, 00:31:36.117 "data_offset": 2048, 00:31:36.117 "data_size": 63488 00:31:36.117 } 00:31:36.117 ] 00:31:36.117 }' 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:36.117 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 92639ee4-208a-4b16-9eca-c44fb4fa17f4 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.684 [2024-10-28 13:42:50.700001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:36.684 NewBaseBdev 00:31:36.684 [2024-10-28 13:42:50.700450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:36.684 [2024-10-28 13:42:50.700475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:36.684 [2024-10-28 13:42:50.700805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:31:36.684 [2024-10-28 13:42:50.701383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:36.684 [2024-10-28 13:42:50.701408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.684 [2024-10-28 13:42:50.701533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.684 13:42:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:36.684 [
00:31:36.684 {
00:31:36.684 "name": "NewBaseBdev",
00:31:36.684 "aliases": [
00:31:36.684 "92639ee4-208a-4b16-9eca-c44fb4fa17f4"
00:31:36.684 ],
00:31:36.684 "product_name": "Malloc disk",
00:31:36.684 "block_size": 512,
00:31:36.684 "num_blocks": 65536,
00:31:36.684 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4",
00:31:36.684 "assigned_rate_limits": {
00:31:36.684 "rw_ios_per_sec": 0,
00:31:36.684 "rw_mbytes_per_sec": 0,
00:31:36.684 "r_mbytes_per_sec": 0,
00:31:36.684 "w_mbytes_per_sec": 0
00:31:36.684 },
00:31:36.684 "claimed": true,
00:31:36.684 "claim_type": "exclusive_write",
00:31:36.684 "zoned": false,
00:31:36.684 "supported_io_types": {
00:31:36.684 "read": true,
00:31:36.684 "write": true,
00:31:36.684 "unmap": true,
00:31:36.684 "flush": true,
00:31:36.684 "reset": true,
00:31:36.684 "nvme_admin": false,
00:31:36.684 "nvme_io": false,
00:31:36.684 "nvme_io_md": false,
00:31:36.684 "write_zeroes": true,
00:31:36.684 "zcopy": true,
00:31:36.684 "get_zone_info": false,
00:31:36.684 "zone_management": false,
00:31:36.684 "zone_append": false,
00:31:36.684 "compare": false,
00:31:36.684 "compare_and_write": false,
00:31:36.684 "abort": true,
00:31:36.684 "seek_hole": false,
00:31:36.684 "seek_data": false,
00:31:36.684 "copy": true,
00:31:36.684 "nvme_iov_md": false
00:31:36.684 },
00:31:36.684 "memory_domains": [
00:31:36.684 {
00:31:36.684 "dma_device_id": "system",
00:31:36.684 "dma_device_type": 1
00:31:36.684 },
00:31:36.684 {
00:31:36.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:36.684 "dma_device_type": 2
00:31:36.684 }
00:31:36.684 ],
00:31:36.684 "driver_specific": {}
00:31:36.684 }
00:31:36.684 ]
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:36.684 "name": "Existed_Raid",
00:31:36.684 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e",
00:31:36.684 "strip_size_kb": 64,
00:31:36.684 "state": "online",
00:31:36.684 "raid_level": "raid5f",
00:31:36.684 "superblock": true,
00:31:36.684 "num_base_bdevs": 3,
00:31:36.684 "num_base_bdevs_discovered": 3,
00:31:36.684 "num_base_bdevs_operational": 3,
00:31:36.684 "base_bdevs_list": [
00:31:36.684 {
00:31:36.684 "name": "NewBaseBdev",
00:31:36.684 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4",
00:31:36.684 "is_configured": true,
00:31:36.684 "data_offset": 2048,
00:31:36.684 "data_size": 63488
00:31:36.684 },
00:31:36.684 {
00:31:36.684 "name": "BaseBdev2",
00:31:36.684 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed",
00:31:36.684 "is_configured": true,
00:31:36.684 "data_offset": 2048,
00:31:36.684 "data_size": 63488
00:31:36.684 },
00:31:36.684 {
00:31:36.684 "name": "BaseBdev3",
00:31:36.684 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf",
00:31:36.684 "is_configured": true,
00:31:36.684 "data_offset": 2048,
00:31:36.684 "data_size": 63488
00:31:36.684 }
00:31:36.684 ]
00:31:36.684 }'
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:36.684 13:42:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.247 [2024-10-28 13:42:51.220502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:37.247 "name": "Existed_Raid",
00:31:37.247 "aliases": [
00:31:37.247 "ff246661-07a8-40c1-9491-7312f78f7d5e"
00:31:37.247 ],
00:31:37.247 "product_name": "Raid Volume",
00:31:37.247 "block_size": 512,
00:31:37.247 "num_blocks": 126976,
00:31:37.247 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e",
00:31:37.247 "assigned_rate_limits": {
00:31:37.247 "rw_ios_per_sec": 0,
00:31:37.247 "rw_mbytes_per_sec": 0,
00:31:37.247 "r_mbytes_per_sec": 0,
00:31:37.247 "w_mbytes_per_sec": 0
00:31:37.247 },
00:31:37.247 "claimed": false,
00:31:37.247 "zoned": false,
00:31:37.247 "supported_io_types": {
00:31:37.247 "read": true,
00:31:37.247 "write": true,
00:31:37.247 "unmap": false,
00:31:37.247 "flush": false,
00:31:37.247 "reset": true,
00:31:37.247 "nvme_admin": false,
00:31:37.247 "nvme_io": false,
00:31:37.247 "nvme_io_md": false,
00:31:37.247 "write_zeroes": true,
00:31:37.247 "zcopy": false,
00:31:37.247 "get_zone_info": false,
00:31:37.247 "zone_management": false,
00:31:37.247 "zone_append": false,
00:31:37.247 "compare": false,
00:31:37.247 "compare_and_write": false,
00:31:37.247 "abort": false,
00:31:37.247 "seek_hole": false,
00:31:37.247 "seek_data": false,
00:31:37.247 "copy": false,
00:31:37.247 "nvme_iov_md": false
00:31:37.247 },
00:31:37.247 "driver_specific": {
00:31:37.247 "raid": {
00:31:37.247 "uuid": "ff246661-07a8-40c1-9491-7312f78f7d5e",
00:31:37.247 "strip_size_kb": 64,
00:31:37.247 "state": "online",
00:31:37.247 "raid_level": "raid5f",
00:31:37.247 "superblock": true,
00:31:37.247 "num_base_bdevs": 3,
00:31:37.247 "num_base_bdevs_discovered": 3,
00:31:37.247 "num_base_bdevs_operational": 3,
00:31:37.247 "base_bdevs_list": [
00:31:37.247 {
00:31:37.247 "name": "NewBaseBdev",
00:31:37.247 "uuid": "92639ee4-208a-4b16-9eca-c44fb4fa17f4",
00:31:37.247 "is_configured": true,
00:31:37.247 "data_offset": 2048,
00:31:37.247 "data_size": 63488
00:31:37.247 },
00:31:37.247 {
00:31:37.247 "name": "BaseBdev2",
00:31:37.247 "uuid": "f1644640-c8cf-49d9-9560-18188fd5b5ed",
00:31:37.247 "is_configured": true,
00:31:37.247 "data_offset": 2048,
00:31:37.247 "data_size": 63488
00:31:37.247 },
00:31:37.247 {
00:31:37.247 "name": "BaseBdev3",
00:31:37.247 "uuid": "fc7e13b8-aae6-481c-b718-90a681174cdf",
00:31:37.247 "is_configured": true,
00:31:37.247 "data_offset": 2048,
00:31:37.247 "data_size": 63488
00:31:37.247 }
00:31:37.247 ]
00:31:37.247 }
00:31:37.247 }
00:31:37.247 }'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:31:37.247 BaseBdev2
00:31:37.247 BaseBdev3'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.247 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.506 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:37.507 [2024-10-28 13:42:51.532311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:31:37.507 [2024-10-28 13:42:51.532491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:31:37.507 [2024-10-28 13:42:51.532612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:31:37.507 [2024-10-28 13:42:51.532968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:31:37.507 [2024-10-28 13:42:51.532987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93273
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93273 ']'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93273
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93273
00:31:37.507 killing process with pid 93273
13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93273'
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93273
00:31:37.507 [2024-10-28 13:42:51.570109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:31:37.507 13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93273
00:31:37.507 [2024-10-28 13:42:51.602038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:31:37.764 13:42:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
************************************
00:31:37.764 END TEST raid5f_state_function_test_sb
************************************
00:31:37.764
00:31:37.764 real 0m10.206s
00:31:37.764 user 0m17.976s
00:31:37.764 sys 0m1.586s
13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
13:42:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
13:42:51 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3
13:42:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
13:42:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
13:42:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
00:31:37.764 START TEST raid5f_superblock_test
************************************
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']'
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=93888
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93888
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 93888 ']'
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:37.764 13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
13:42:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:38.022 [2024-10-28 13:42:52.000798] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization...
00:31:38.022 [2024-10-28 13:42:52.001009] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93888 ]
00:31:38.022 [2024-10-28 13:42:52.154100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:31:38.317 [2024-10-28 13:42:52.184811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:38.317 [2024-10-28 13:42:52.232879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:38.317 [2024-10-28 13:42:52.292696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:31:38.317 [2024-10-28 13:42:52.292738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:31:38.884 13:42:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:38.884 13:42:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:31:38.884 13:42:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc1
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:38.884 [2024-10-28 13:42:53.028399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:31:38.884 [2024-10-28 13:42:53.028656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:38.884 [2024-10-28 13:42:53.028735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:31:38.884 [2024-10-28 13:42:53.028864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:38.884 [2024-10-28 13:42:53.031954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:38.884 [2024-10-28 13:42:53.032134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
pt1
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.884 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.142 malloc2
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.142 [2024-10-28 13:42:53.060683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:31:39.142 [2024-10-28 13:42:53.060748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:39.142 [2024-10-28 13:42:53.060776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:31:39.142 [2024-10-28 13:42:53.060791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:39.142 [2024-10-28 13:42:53.063697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:39.142 [2024-10-28 13:42:53.063742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
pt2
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
malloc3
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.142 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.143 [2024-10-28 13:42:53.089026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:31:39.143 [2024-10-28 13:42:53.089241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:39.143 [2024-10-28 13:42:53.089318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:31:39.143 [2024-10-28 13:42:53.089532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:39.143 [2024-10-28 13:42:53.092539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:39.143 [2024-10-28 13:42:53.092709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
pt3
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.143 [2024-10-28 13:42:53.097338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:31:39.143 [2024-10-28 13:42:53.099993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:31:39.143 [2024-10-28 13:42:53.100080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:31:39.143 [2024-10-28 13:42:53.100331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:31:39.143 [2024-10-28 13:42:53.100371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:31:39.143 [2024-10-28 13:42:53.100693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:31:39.143 [2024-10-28 13:42:53.101453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:31:39.143 [2024-10-28 13:42:53.101511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:31:39.143 [2024-10-28 13:42:53.101837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:39.143 "name": "raid_bdev1",
00:31:39.143 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca",
00:31:39.143 "strip_size_kb": 64,
00:31:39.143 "state": "online",
00:31:39.143 "raid_level": "raid5f",
00:31:39.143 "superblock": true,
00:31:39.143 "num_base_bdevs": 3,
00:31:39.143 "num_base_bdevs_discovered": 3,
00:31:39.143 "num_base_bdevs_operational": 3,
00:31:39.143 "base_bdevs_list": [
00:31:39.143 {
00:31:39.143 "name": "pt1",
00:31:39.143 "uuid": "00000000-0000-0000-0000-000000000001",
00:31:39.143 "is_configured": true,
00:31:39.143 "data_offset": 2048,
00:31:39.143 "data_size": 63488
00:31:39.143 },
00:31:39.143 {
00:31:39.143 "name": "pt2",
00:31:39.143 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:39.143 "is_configured": true,
00:31:39.143 "data_offset": 2048,
00:31:39.143 "data_size": 63488
00:31:39.143 },
00:31:39.143 {
00:31:39.143 "name": "pt3",
00:31:39.143 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:39.143 "is_configured": true,
00:31:39.143 "data_offset": 2048,
00:31:39.143 "data_size": 63488
00:31:39.143 }
00:31:39.143 ]
00:31:39.143 }'
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:39.143 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:31:39.709 [2024-10-28 13:42:53.630330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:39.709 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:39.709 "name": "raid_bdev1",
00:31:39.709 "aliases": [
00:31:39.709 "93c0ce32-76c2-4baa-811c-66b9dd227cca"
00:31:39.709 ],
00:31:39.709 "product_name": "Raid Volume",
00:31:39.709 "block_size": 512,
00:31:39.709 "num_blocks": 126976,
00:31:39.709 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca",
00:31:39.709 "assigned_rate_limits": {
00:31:39.709 "rw_ios_per_sec": 0,
00:31:39.709 "rw_mbytes_per_sec": 0,
00:31:39.709 "r_mbytes_per_sec": 0,
00:31:39.709 "w_mbytes_per_sec": 0
00:31:39.709 },
00:31:39.709 "claimed": false,
00:31:39.709 "zoned": false,
00:31:39.709 "supported_io_types": {
00:31:39.709 "read": true,
00:31:39.709 "write": true,
00:31:39.709 "unmap": false,
00:31:39.709 "flush": false,
00:31:39.709 "reset": true,
00:31:39.709 "nvme_admin": false,
00:31:39.709 "nvme_io": false,
00:31:39.709 "nvme_io_md": false,
00:31:39.709 "write_zeroes": true,
00:31:39.709 "zcopy": false,
00:31:39.709 "get_zone_info": false,
00:31:39.709 "zone_management": false,
00:31:39.709 "zone_append": false,
00:31:39.709 "compare": false,
00:31:39.709 "compare_and_write": false,
00:31:39.710 "abort": false,
00:31:39.710 "seek_hole": false,
00:31:39.710 "seek_data": false,
00:31:39.710 "copy": false,
00:31:39.710 "nvme_iov_md": false
00:31:39.710 },
00:31:39.710 "driver_specific": {
00:31:39.710 "raid": {
00:31:39.710 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca",
00:31:39.710 "strip_size_kb": 64,
00:31:39.710 "state": "online",
00:31:39.710 "raid_level": "raid5f",
00:31:39.710 "superblock": true,
00:31:39.710 "num_base_bdevs": 3,
00:31:39.710 "num_base_bdevs_discovered": 3,
00:31:39.710 "num_base_bdevs_operational": 3,
00:31:39.710 "base_bdevs_list": [
00:31:39.710 {
00:31:39.710 "name": "pt1",
00:31:39.710 "uuid": "00000000-0000-0000-0000-000000000001",
00:31:39.710 "is_configured": true,
00:31:39.710 "data_offset": 2048,
00:31:39.710 "data_size": 63488
00:31:39.710 },
00:31:39.710 {
00:31:39.710 "name": "pt2",
00:31:39.710 "uuid": "00000000-0000-0000-0000-000000000002",
00:31:39.710 "is_configured": true,
00:31:39.710 "data_offset": 2048,
00:31:39.710 "data_size": 63488
00:31:39.710 },
00:31:39.710 {
00:31:39.710 "name": "pt3",
00:31:39.710 "uuid": "00000000-0000-0000-0000-000000000003",
00:31:39.710 "is_configured": true,
00:31:39.710 "data_offset": 2048,
00:31:39.710 "data_size": 63488
00:31:39.710 }
00:31:39.710 ]
00:31:39.710 }
00:31:39.710 }
00:31:39.710 }'
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:31:39.710 pt2
00:31:39.710 pt3'
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.710 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.969 [2024-10-28 13:42:53.946322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.969 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=93c0ce32-76c2-4baa-811c-66b9dd227cca 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 93c0ce32-76c2-4baa-811c-66b9dd227cca ']' 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 [2024-10-28 13:42:53.994035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:39.970 [2024-10-28 
13:42:53.994206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:39.970 [2024-10-28 13:42:53.994392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:39.970 [2024-10-28 13:42:53.994634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:39.970 [2024-10-28 13:42:53.994771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.970 13:42:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i 
in "${base_bdevs_pt[@]}" 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:39.970 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:40.229 13:42:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.229 [2024-10-28 13:42:54.162226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:40.229 [2024-10-28 13:42:54.165049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:40.229 [2024-10-28 13:42:54.165116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:40.229 [2024-10-28 13:42:54.165215] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:40.229 [2024-10-28 13:42:54.165340] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:40.229 [2024-10-28 13:42:54.165373] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:40.229 [2024-10-28 13:42:54.165398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:31:40.229 [2024-10-28 13:42:54.165412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:31:40.229 request: 00:31:40.229 { 00:31:40.229 "name": "raid_bdev1", 00:31:40.229 "raid_level": "raid5f", 00:31:40.229 "base_bdevs": [ 00:31:40.229 "malloc1", 00:31:40.229 "malloc2", 00:31:40.229 "malloc3" 00:31:40.229 ], 00:31:40.229 "strip_size_kb": 64, 00:31:40.229 "superblock": false, 00:31:40.229 "method": "bdev_raid_create", 00:31:40.229 "req_id": 1 00:31:40.229 } 00:31:40.229 Got JSON-RPC error response 00:31:40.229 response: 00:31:40.229 { 00:31:40.229 "code": -17, 00:31:40.229 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:40.229 } 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:40.229 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n 
'' ']' 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.230 [2024-10-28 13:42:54.222262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:40.230 [2024-10-28 13:42:54.222446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.230 [2024-10-28 13:42:54.222518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:40.230 [2024-10-28 13:42:54.222741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.230 [2024-10-28 13:42:54.225961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.230 [2024-10-28 13:42:54.226134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:40.230 [2024-10-28 13:42:54.226362] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:40.230 [2024-10-28 13:42:54.226527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:40.230 pt1 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.230 13:42:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.230 "name": "raid_bdev1", 00:31:40.230 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:40.230 "strip_size_kb": 64, 00:31:40.230 "state": "configuring", 00:31:40.230 "raid_level": "raid5f", 00:31:40.230 "superblock": true, 00:31:40.230 "num_base_bdevs": 3, 00:31:40.230 "num_base_bdevs_discovered": 1, 00:31:40.230 "num_base_bdevs_operational": 3, 00:31:40.230 "base_bdevs_list": [ 00:31:40.230 { 00:31:40.230 "name": "pt1", 00:31:40.230 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:40.230 "is_configured": true, 00:31:40.230 "data_offset": 2048, 00:31:40.230 "data_size": 63488 00:31:40.230 }, 00:31:40.230 { 00:31:40.230 "name": null, 00:31:40.230 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:31:40.230 "is_configured": false, 00:31:40.230 "data_offset": 2048, 00:31:40.230 "data_size": 63488 00:31:40.230 }, 00:31:40.230 { 00:31:40.230 "name": null, 00:31:40.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:40.230 "is_configured": false, 00:31:40.230 "data_offset": 2048, 00:31:40.230 "data_size": 63488 00:31:40.230 } 00:31:40.230 ] 00:31:40.230 }' 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.230 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 [2024-10-28 13:42:54.738547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:40.797 [2024-10-28 13:42:54.738802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.797 [2024-10-28 13:42:54.738885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:40.797 [2024-10-28 13:42:54.738907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.797 [2024-10-28 13:42:54.739537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.797 [2024-10-28 13:42:54.739569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:40.797 [2024-10-28 13:42:54.739673] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:40.797 [2024-10-28 13:42:54.739704] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:40.797 pt2 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 [2024-10-28 13:42:54.746581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.797 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
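The verify_raid_bdev_state calls above (bdev_raid.sh@113) fetch all raid bdevs and isolate one entry by name before checking its fields. A small sketch of that select-then-inspect pattern against an illustrative JSON array (not live SPDK output; requires jq):

```shell
# Sketch: select one raid bdev's record from bdev_raid_get_bdevs-style output
# and read back a couple of its state fields. Sample data is hypothetical.
all_bdevs='[
  {"name": "raid_bdev1", "state": "configuring", "num_base_bdevs_discovered": 1},
  {"name": "other_raid", "state": "online", "num_base_bdevs_discovered": 3}
]'

# Same filter shape as bdev_raid.sh@113: keep only the entry we care about.
tmp=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$all_bdevs")

state=$(jq -r '.state' <<< "$tmp")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")

echo "state=$state discovered=$discovered"
```

The helper then compares these extracted values against the expected state, raid level, strip size, and base bdev counts passed in as arguments.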
00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.798 "name": "raid_bdev1", 00:31:40.798 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:40.798 "strip_size_kb": 64, 00:31:40.798 "state": "configuring", 00:31:40.798 "raid_level": "raid5f", 00:31:40.798 "superblock": true, 00:31:40.798 "num_base_bdevs": 3, 00:31:40.798 "num_base_bdevs_discovered": 1, 00:31:40.798 "num_base_bdevs_operational": 3, 00:31:40.798 "base_bdevs_list": [ 00:31:40.798 { 00:31:40.798 "name": "pt1", 00:31:40.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:40.798 "is_configured": true, 00:31:40.798 "data_offset": 2048, 00:31:40.798 "data_size": 63488 00:31:40.798 }, 00:31:40.798 { 00:31:40.798 "name": null, 00:31:40.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:40.798 "is_configured": false, 00:31:40.798 "data_offset": 0, 00:31:40.798 "data_size": 63488 00:31:40.798 }, 00:31:40.798 { 00:31:40.798 "name": null, 00:31:40.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:40.798 "is_configured": false, 00:31:40.798 "data_offset": 2048, 00:31:40.798 "data_size": 63488 00:31:40.798 } 00:31:40.798 ] 00:31:40.798 }' 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.798 13:42:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 
-- # (( i < num_base_bdevs )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.364 [2024-10-28 13:42:55.286772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:41.364 [2024-10-28 13:42:55.286980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:41.364 [2024-10-28 13:42:55.287051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:41.364 [2024-10-28 13:42:55.287191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:41.364 [2024-10-28 13:42:55.287793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:41.364 [2024-10-28 13:42:55.287850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:41.364 [2024-10-28 13:42:55.287953] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:41.364 [2024-10-28 13:42:55.288010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:41.364 pt2 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.364 
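The geometry checks traced earlier as `[[ 512 == \5\1\2\ \ \  ]]` look odd in xtrace but follow from how the compared strings are built: jq's join(" ") turns null metadata fields into empty slots, so the joined value keeps trailing spaces, and escaping every character on the right-hand side of [[ ]] forces a literal match that preserves them. A pure-bash sketch (values illustrative):

```shell
# "512" joined with three null fields (md_size, md_interleave, dif_type)
# yields "512" plus three trailing spaces.
cmp_raid_bdev='512   '
cmp_base_bdev='512   '

# Backslash-escaping each character (as xtrace renders it) makes the
# right-hand side a literal string, not a glob, so trailing spaces count.
if [[ $cmp_base_bdev == \5\1\2\ \ \  ]]; then
  echo "base bdev geometry matches raid bdev"
fi
```

Quoting the right-hand side ("512   ") would behave identically; the escaped form is simply what bash's xtrace prints for a quoted comparison.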
13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.364 [2024-10-28 13:42:55.298717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:41.364 [2024-10-28 13:42:55.298945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:41.364 [2024-10-28 13:42:55.299009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:41.364 [2024-10-28 13:42:55.299128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:41.364 [2024-10-28 13:42:55.299612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:41.364 [2024-10-28 13:42:55.299780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:41.364 [2024-10-28 13:42:55.299978] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:41.364 [2024-10-28 13:42:55.300118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:41.364 [2024-10-28 13:42:55.300426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:41.364 [2024-10-28 13:42:55.300457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:41.364 [2024-10-28 13:42:55.300802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:31:41.364 [2024-10-28 13:42:55.301362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:41.364 [2024-10-28 13:42:55.301380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:41.364 [2024-10-28 13:42:55.301510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:41.364 pt3 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.364 13:42:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.364 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.365 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:41.365 "name": 
"raid_bdev1", 00:31:41.365 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:41.365 "strip_size_kb": 64, 00:31:41.365 "state": "online", 00:31:41.365 "raid_level": "raid5f", 00:31:41.365 "superblock": true, 00:31:41.365 "num_base_bdevs": 3, 00:31:41.365 "num_base_bdevs_discovered": 3, 00:31:41.365 "num_base_bdevs_operational": 3, 00:31:41.365 "base_bdevs_list": [ 00:31:41.365 { 00:31:41.365 "name": "pt1", 00:31:41.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:41.365 "is_configured": true, 00:31:41.365 "data_offset": 2048, 00:31:41.365 "data_size": 63488 00:31:41.365 }, 00:31:41.365 { 00:31:41.365 "name": "pt2", 00:31:41.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:41.365 "is_configured": true, 00:31:41.365 "data_offset": 2048, 00:31:41.365 "data_size": 63488 00:31:41.365 }, 00:31:41.365 { 00:31:41.365 "name": "pt3", 00:31:41.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:41.365 "is_configured": true, 00:31:41.365 "data_offset": 2048, 00:31:41.365 "data_size": 63488 00:31:41.365 } 00:31:41.365 ] 00:31:41.365 }' 00:31:41.365 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:41.365 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:41.931 [2024-10-28 13:42:55.831623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.931 "name": "raid_bdev1", 00:31:41.931 "aliases": [ 00:31:41.931 "93c0ce32-76c2-4baa-811c-66b9dd227cca" 00:31:41.931 ], 00:31:41.931 "product_name": "Raid Volume", 00:31:41.931 "block_size": 512, 00:31:41.931 "num_blocks": 126976, 00:31:41.931 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:41.931 "assigned_rate_limits": { 00:31:41.931 "rw_ios_per_sec": 0, 00:31:41.931 "rw_mbytes_per_sec": 0, 00:31:41.931 "r_mbytes_per_sec": 0, 00:31:41.931 "w_mbytes_per_sec": 0 00:31:41.931 }, 00:31:41.931 "claimed": false, 00:31:41.931 "zoned": false, 00:31:41.931 "supported_io_types": { 00:31:41.931 "read": true, 00:31:41.931 "write": true, 00:31:41.931 "unmap": false, 00:31:41.931 "flush": false, 00:31:41.931 "reset": true, 00:31:41.931 "nvme_admin": false, 00:31:41.931 "nvme_io": false, 00:31:41.931 "nvme_io_md": false, 00:31:41.931 "write_zeroes": true, 00:31:41.931 "zcopy": false, 00:31:41.931 "get_zone_info": false, 00:31:41.931 "zone_management": false, 00:31:41.931 "zone_append": false, 00:31:41.931 "compare": false, 00:31:41.931 "compare_and_write": false, 00:31:41.931 "abort": false, 00:31:41.931 "seek_hole": false, 00:31:41.931 "seek_data": false, 00:31:41.931 "copy": false, 00:31:41.931 "nvme_iov_md": false 00:31:41.931 }, 00:31:41.931 "driver_specific": { 00:31:41.931 
"raid": { 00:31:41.931 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:41.931 "strip_size_kb": 64, 00:31:41.931 "state": "online", 00:31:41.931 "raid_level": "raid5f", 00:31:41.931 "superblock": true, 00:31:41.931 "num_base_bdevs": 3, 00:31:41.931 "num_base_bdevs_discovered": 3, 00:31:41.931 "num_base_bdevs_operational": 3, 00:31:41.931 "base_bdevs_list": [ 00:31:41.931 { 00:31:41.931 "name": "pt1", 00:31:41.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 }, 00:31:41.931 { 00:31:41.931 "name": "pt2", 00:31:41.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 }, 00:31:41.931 { 00:31:41.931 "name": "pt3", 00:31:41.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:41.931 "is_configured": true, 00:31:41.931 "data_offset": 2048, 00:31:41.931 "data_size": 63488 00:31:41.931 } 00:31:41.931 ] 00:31:41.931 } 00:31:41.931 } 00:31:41.931 }' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:41.931 pt2 00:31:41.931 pt3' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.931 13:42:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.931 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.932 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.932 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:42.189 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:42.189 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.190 13:42:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:42.190 [2024-10-28 13:42:56.147644] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 93c0ce32-76c2-4baa-811c-66b9dd227cca '!=' 93c0ce32-76c2-4baa-811c-66b9dd227cca ']' 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.190 [2024-10-28 13:42:56.199488] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.190 "name": "raid_bdev1", 
00:31:42.190 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:42.190 "strip_size_kb": 64, 00:31:42.190 "state": "online", 00:31:42.190 "raid_level": "raid5f", 00:31:42.190 "superblock": true, 00:31:42.190 "num_base_bdevs": 3, 00:31:42.190 "num_base_bdevs_discovered": 2, 00:31:42.190 "num_base_bdevs_operational": 2, 00:31:42.190 "base_bdevs_list": [ 00:31:42.190 { 00:31:42.190 "name": null, 00:31:42.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.190 "is_configured": false, 00:31:42.190 "data_offset": 0, 00:31:42.190 "data_size": 63488 00:31:42.190 }, 00:31:42.190 { 00:31:42.190 "name": "pt2", 00:31:42.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 }, 00:31:42.190 { 00:31:42.190 "name": "pt3", 00:31:42.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:42.190 "is_configured": true, 00:31:42.190 "data_offset": 2048, 00:31:42.190 "data_size": 63488 00:31:42.190 } 00:31:42.190 ] 00:31:42.190 }' 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.190 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.756 [2024-10-28 13:42:56.723642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:42.756 [2024-10-28 13:42:56.723790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:42.756 [2024-10-28 13:42:56.723938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:42.756 [2024-10-28 13:42:56.724015] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:42.756 [2024-10-28 13:42:56.724034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:42.756 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.757 [2024-10-28 13:42:56.803618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:42.757 [2024-10-28 13:42:56.803808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.757 [2024-10-28 13:42:56.803881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:42.757 [2024-10-28 13:42:56.804135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.757 [2024-10-28 13:42:56.807192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.757 pt2 00:31:42.757 [2024-10-28 13:42:56.807370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:42.757 [2024-10-28 13:42:56.807496] bdev_raid.c:3901:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:31:42.757 [2024-10-28 13:42:56.807549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.757 13:42:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.757 "name": "raid_bdev1", 00:31:42.757 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:42.757 "strip_size_kb": 64, 00:31:42.757 "state": "configuring", 00:31:42.757 "raid_level": "raid5f", 00:31:42.757 "superblock": true, 00:31:42.757 "num_base_bdevs": 3, 00:31:42.757 "num_base_bdevs_discovered": 1, 00:31:42.757 "num_base_bdevs_operational": 2, 00:31:42.757 "base_bdevs_list": [ 00:31:42.757 { 00:31:42.757 "name": null, 00:31:42.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.757 "is_configured": false, 00:31:42.757 "data_offset": 2048, 00:31:42.757 "data_size": 63488 00:31:42.757 }, 00:31:42.757 { 00:31:42.757 "name": "pt2", 00:31:42.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:42.757 "is_configured": true, 00:31:42.757 "data_offset": 2048, 00:31:42.757 "data_size": 63488 00:31:42.757 }, 00:31:42.757 { 00:31:42.757 "name": null, 00:31:42.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:42.757 "is_configured": false, 00:31:42.757 "data_offset": 2048, 00:31:42.757 "data_size": 63488 00:31:42.757 } 00:31:42.757 ] 00:31:42.757 }' 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.757 13:42:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.323 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.324 13:42:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.324 [2024-10-28 13:42:57.331971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:43.324 [2024-10-28 13:42:57.332224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.324 [2024-10-28 13:42:57.332394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:43.324 [2024-10-28 13:42:57.332430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.324 [2024-10-28 13:42:57.332952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.324 [2024-10-28 13:42:57.332995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:43.324 [2024-10-28 13:42:57.333094] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:43.324 [2024-10-28 13:42:57.333155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:43.324 [2024-10-28 13:42:57.333283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:43.324 [2024-10-28 13:42:57.333304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:43.324 [2024-10-28 13:42:57.333606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:43.324 [2024-10-28 13:42:57.334258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:43.324 [2024-10-28 13:42:57.334277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:43.324 [2024-10-28 13:42:57.334574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:43.324 pt3 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.324 13:42:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.324 "name": "raid_bdev1", 00:31:43.324 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:43.324 "strip_size_kb": 64, 00:31:43.324 "state": "online", 00:31:43.324 "raid_level": "raid5f", 00:31:43.324 "superblock": true, 
00:31:43.324 "num_base_bdevs": 3, 00:31:43.324 "num_base_bdevs_discovered": 2, 00:31:43.324 "num_base_bdevs_operational": 2, 00:31:43.324 "base_bdevs_list": [ 00:31:43.324 { 00:31:43.324 "name": null, 00:31:43.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.324 "is_configured": false, 00:31:43.324 "data_offset": 2048, 00:31:43.324 "data_size": 63488 00:31:43.324 }, 00:31:43.324 { 00:31:43.324 "name": "pt2", 00:31:43.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:43.324 "is_configured": true, 00:31:43.324 "data_offset": 2048, 00:31:43.324 "data_size": 63488 00:31:43.324 }, 00:31:43.324 { 00:31:43.324 "name": "pt3", 00:31:43.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:43.324 "is_configured": true, 00:31:43.324 "data_offset": 2048, 00:31:43.324 "data_size": 63488 00:31:43.324 } 00:31:43.324 ] 00:31:43.324 }' 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.324 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.889 [2024-10-28 13:42:57.868366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.889 [2024-10-28 13:42:57.868588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:43.889 [2024-10-28 13:42:57.868702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:43.889 [2024-10-28 13:42:57.868800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:43.889 [2024-10-28 13:42:57.868814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.889 [2024-10-28 13:42:57.948379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:31:43.889 [2024-10-28 13:42:57.948591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.889 [2024-10-28 13:42:57.948754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:43.889 [2024-10-28 13:42:57.948780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.889 [2024-10-28 13:42:57.951781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.889 pt1 00:31:43.889 [2024-10-28 13:42:57.952014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:43.889 [2024-10-28 13:42:57.952138] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:43.889 [2024-10-28 13:42:57.952213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:43.889 [2024-10-28 13:42:57.952373] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:43.889 [2024-10-28 13:42:57.952396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.889 [2024-10-28 13:42:57.952424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:31:43.889 [2024-10-28 13:42:57.952473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.889 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.890 13:42:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.890 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.890 "name": "raid_bdev1", 00:31:43.890 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:43.890 "strip_size_kb": 64, 00:31:43.890 "state": "configuring", 00:31:43.890 "raid_level": "raid5f", 00:31:43.890 "superblock": true, 00:31:43.890 "num_base_bdevs": 3, 00:31:43.890 "num_base_bdevs_discovered": 1, 00:31:43.890 "num_base_bdevs_operational": 2, 00:31:43.890 "base_bdevs_list": [ 00:31:43.890 { 00:31:43.890 "name": null, 00:31:43.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.890 "is_configured": false, 
00:31:43.890 "data_offset": 2048, 00:31:43.890 "data_size": 63488 00:31:43.890 }, 00:31:43.890 { 00:31:43.890 "name": "pt2", 00:31:43.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:43.890 "is_configured": true, 00:31:43.890 "data_offset": 2048, 00:31:43.890 "data_size": 63488 00:31:43.890 }, 00:31:43.890 { 00:31:43.890 "name": null, 00:31:43.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:43.890 "is_configured": false, 00:31:43.890 "data_offset": 2048, 00:31:43.890 "data_size": 63488 00:31:43.890 } 00:31:43.890 ] 00:31:43.890 }' 00:31:43.890 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.890 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.456 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:44.456 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.456 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.457 [2024-10-28 13:42:58.536693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:44.457 [2024-10-28 13:42:58.536950] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.457 [2024-10-28 13:42:58.537110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:44.457 [2024-10-28 13:42:58.537173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.457 [2024-10-28 13:42:58.537757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.457 [2024-10-28 13:42:58.537807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:44.457 [2024-10-28 13:42:58.537940] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:44.457 [2024-10-28 13:42:58.537969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:44.457 [2024-10-28 13:42:58.538084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:31:44.457 [2024-10-28 13:42:58.538098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:44.457 [2024-10-28 13:42:58.538444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:31:44.457 [2024-10-28 13:42:58.539055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:31:44.457 [2024-10-28 13:42:58.539075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:31:44.457 [2024-10-28 13:42:58.539507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.457 pt3 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:44.457 13:42:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.457 "name": "raid_bdev1", 00:31:44.457 "uuid": "93c0ce32-76c2-4baa-811c-66b9dd227cca", 00:31:44.457 "strip_size_kb": 64, 00:31:44.457 "state": "online", 00:31:44.457 "raid_level": "raid5f", 00:31:44.457 "superblock": true, 00:31:44.457 "num_base_bdevs": 3, 00:31:44.457 "num_base_bdevs_discovered": 2, 00:31:44.457 "num_base_bdevs_operational": 2, 00:31:44.457 "base_bdevs_list": [ 00:31:44.457 { 00:31:44.457 "name": null, 00:31:44.457 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:44.457 "is_configured": false, 00:31:44.457 "data_offset": 2048, 00:31:44.457 "data_size": 63488 00:31:44.457 }, 00:31:44.457 { 00:31:44.457 "name": "pt2", 00:31:44.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:44.457 "is_configured": true, 00:31:44.457 "data_offset": 2048, 00:31:44.457 "data_size": 63488 00:31:44.457 }, 00:31:44.457 { 00:31:44.457 "name": "pt3", 00:31:44.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:44.457 "is_configured": true, 00:31:44.457 "data_offset": 2048, 00:31:44.457 "data_size": 63488 00:31:44.457 } 00:31:44.457 ] 00:31:44.457 }' 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.457 13:42:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:45.023 [2024-10-28 13:42:59.125642] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 93c0ce32-76c2-4baa-811c-66b9dd227cca '!=' 93c0ce32-76c2-4baa-811c-66b9dd227cca ']' 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93888 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 93888 ']' 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 93888 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:31:45.023 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93888 00:31:45.281 killing process with pid 93888 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93888' 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 93888 00:31:45.281 [2024-10-28 13:42:59.205381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:45.281 [2024-10-28 13:42:59.205484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.281 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 93888 00:31:45.281 [2024-10-28 13:42:59.205592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:31:45.281 [2024-10-28 13:42:59.205611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:31:45.281 [2024-10-28 13:42:59.240330] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:45.539 13:42:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:45.539 00:31:45.539 real 0m7.588s 00:31:45.539 user 0m13.209s 00:31:45.539 sys 0m1.175s 00:31:45.539 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:45.539 ************************************ 00:31:45.539 END TEST raid5f_superblock_test 00:31:45.539 ************************************ 00:31:45.539 13:42:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.539 13:42:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:31:45.539 13:42:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:45.539 13:42:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:45.539 13:42:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:45.539 13:42:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:45.539 ************************************ 00:31:45.539 START TEST raid5f_rebuild_test 00:31:45.539 ************************************ 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:45.539 13:42:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # 
'[' raid5f '!=' raid1 ']' 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94326 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94326 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94326 ']' 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:45.539 13:42:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.539 [2024-10-28 13:42:59.651006] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:31:45.539 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:45.539 Zero copy mechanism will not be used. 
00:31:45.539 [2024-10-28 13:42:59.651816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94326 ] 00:31:45.797 [2024-10-28 13:42:59.811256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:45.797 [2024-10-28 13:42:59.846874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.797 [2024-10-28 13:42:59.894108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.055 [2024-10-28 13:42:59.958980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:46.055 [2024-10-28 13:42:59.959033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.622 BaseBdev1_malloc 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.622 13:43:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.622 [2024-10-28 13:43:00.734967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:46.622 [2024-10-28 13:43:00.735193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.622 [2024-10-28 13:43:00.735364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:46.622 [2024-10-28 13:43:00.735519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.622 [2024-10-28 13:43:00.738576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.622 [2024-10-28 13:43:00.738633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:46.622 BaseBdev1 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.622 BaseBdev2_malloc 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.622 [2024-10-28 13:43:00.759977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:31:46.622 [2024-10-28 13:43:00.760176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.622 [2024-10-28 13:43:00.760329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:46.622 [2024-10-28 13:43:00.760496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.622 [2024-10-28 13:43:00.763493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.622 [2024-10-28 13:43:00.763654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:46.622 BaseBdev2 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.622 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 BaseBdev3_malloc 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 [2024-10-28 13:43:00.784476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:46.881 [2024-10-28 13:43:00.784663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.881 [2024-10-28 13:43:00.784701] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:31:46.881 [2024-10-28 13:43:00.784719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.881 [2024-10-28 13:43:00.787691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.881 BaseBdev3 00:31:46.881 [2024-10-28 13:43:00.787917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 spare_malloc 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 spare_delay 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 [2024-10-28 13:43:00.830920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:46.881 [2024-10-28 13:43:00.831134] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.881 [2024-10-28 13:43:00.831297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:46.881 [2024-10-28 13:43:00.831443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.881 [2024-10-28 13:43:00.834539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.881 [2024-10-28 13:43:00.834730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:46.881 spare 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.881 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.881 [2024-10-28 13:43:00.839035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:46.881 [2024-10-28 13:43:00.841752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:46.881 [2024-10-28 13:43:00.841975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:46.881 [2024-10-28 13:43:00.842103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:31:46.881 [2024-10-28 13:43:00.842119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:46.881 [2024-10-28 13:43:00.842516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:31:46.881 [2024-10-28 13:43:00.843035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:31:46.881 [2024-10-28 13:43:00.843056] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:31:46.882 [2024-10-28 13:43:00.843274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.882 13:43:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:46.882 "name": "raid_bdev1", 00:31:46.882 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:46.882 "strip_size_kb": 64, 00:31:46.882 "state": "online", 00:31:46.882 "raid_level": "raid5f", 00:31:46.882 "superblock": false, 00:31:46.882 "num_base_bdevs": 3, 00:31:46.882 "num_base_bdevs_discovered": 3, 00:31:46.882 "num_base_bdevs_operational": 3, 00:31:46.882 "base_bdevs_list": [ 00:31:46.882 { 00:31:46.882 "name": "BaseBdev1", 00:31:46.882 "uuid": "8b376c4b-6d5a-58a0-9429-20378375bf67", 00:31:46.882 "is_configured": true, 00:31:46.882 "data_offset": 0, 00:31:46.882 "data_size": 65536 00:31:46.882 }, 00:31:46.882 { 00:31:46.882 "name": "BaseBdev2", 00:31:46.882 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:46.882 "is_configured": true, 00:31:46.882 "data_offset": 0, 00:31:46.882 "data_size": 65536 00:31:46.882 }, 00:31:46.882 { 00:31:46.882 "name": "BaseBdev3", 00:31:46.882 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:46.882 "is_configured": true, 00:31:46.882 "data_offset": 0, 00:31:46.882 "data_size": 65536 00:31:46.882 } 00:31:46.882 ] 00:31:46.882 }' 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:46.882 13:43:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:47.448 [2024-10-28 13:43:01.355716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:31:47.448 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:47.706 [2024-10-28 13:43:01.739632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:31:47.706 /dev/nbd0 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.706 1+0 records in 00:31:47.706 1+0 records out 00:31:47.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601622 s, 6.8 MB/s 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:31:47.706 13:43:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:48.279 512+0 records in 00:31:48.279 512+0 records out 00:31:48.279 67108864 bytes (67 MB, 64 MiB) copied, 0.42609 s, 157 MB/s 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:48.279 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:48.537 
[2024-10-28 13:43:02.526639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 [2024-10-28 13:43:02.538728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.537 "name": "raid_bdev1", 00:31:48.537 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:48.537 "strip_size_kb": 64, 00:31:48.537 "state": "online", 00:31:48.537 "raid_level": "raid5f", 00:31:48.537 "superblock": false, 00:31:48.537 "num_base_bdevs": 3, 00:31:48.537 "num_base_bdevs_discovered": 2, 00:31:48.537 "num_base_bdevs_operational": 2, 00:31:48.537 "base_bdevs_list": [ 00:31:48.537 { 00:31:48.537 "name": null, 00:31:48.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.537 "is_configured": false, 00:31:48.537 "data_offset": 0, 00:31:48.537 "data_size": 65536 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "name": "BaseBdev2", 00:31:48.537 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:48.537 "is_configured": true, 00:31:48.537 "data_offset": 0, 00:31:48.537 "data_size": 65536 00:31:48.537 }, 00:31:48.537 { 00:31:48.537 "name": "BaseBdev3", 00:31:48.537 "uuid": 
"a8350288-b804-5c67-8a13-50a668d2598a", 00:31:48.537 "is_configured": true, 00:31:48.537 "data_offset": 0, 00:31:48.537 "data_size": 65536 00:31:48.537 } 00:31:48.537 ] 00:31:48.537 }' 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.537 13:43:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.103 13:43:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:49.103 13:43:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.103 13:43:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.103 [2024-10-28 13:43:03.018916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:49.103 [2024-10-28 13:43:03.025389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:31:49.103 13:43:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.103 13:43:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:49.103 [2024-10-28 13:43:03.028504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.040 13:43:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:50.040 "name": "raid_bdev1", 00:31:50.040 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:50.040 "strip_size_kb": 64, 00:31:50.040 "state": "online", 00:31:50.040 "raid_level": "raid5f", 00:31:50.040 "superblock": false, 00:31:50.040 "num_base_bdevs": 3, 00:31:50.040 "num_base_bdevs_discovered": 3, 00:31:50.040 "num_base_bdevs_operational": 3, 00:31:50.040 "process": { 00:31:50.040 "type": "rebuild", 00:31:50.040 "target": "spare", 00:31:50.040 "progress": { 00:31:50.040 "blocks": 20480, 00:31:50.040 "percent": 15 00:31:50.040 } 00:31:50.040 }, 00:31:50.040 "base_bdevs_list": [ 00:31:50.040 { 00:31:50.040 "name": "spare", 00:31:50.040 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:50.040 "is_configured": true, 00:31:50.040 "data_offset": 0, 00:31:50.040 "data_size": 65536 00:31:50.040 }, 00:31:50.040 { 00:31:50.040 "name": "BaseBdev2", 00:31:50.040 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:50.040 "is_configured": true, 00:31:50.040 "data_offset": 0, 00:31:50.040 "data_size": 65536 00:31:50.040 }, 00:31:50.040 { 00:31:50.040 "name": "BaseBdev3", 00:31:50.040 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:50.040 "is_configured": true, 00:31:50.040 "data_offset": 0, 00:31:50.040 "data_size": 65536 00:31:50.040 } 00:31:50.040 ] 00:31:50.040 }' 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.040 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.299 [2024-10-28 13:43:04.198222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:50.299 [2024-10-28 13:43:04.241595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:50.299 [2024-10-28 13:43:04.241698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.299 [2024-10-28 13:43:04.241726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:50.299 [2024-10-28 13:43:04.241738] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.299 "name": "raid_bdev1", 00:31:50.299 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:50.299 "strip_size_kb": 64, 00:31:50.299 "state": "online", 00:31:50.299 "raid_level": "raid5f", 00:31:50.299 "superblock": false, 00:31:50.299 "num_base_bdevs": 3, 00:31:50.299 "num_base_bdevs_discovered": 2, 00:31:50.299 "num_base_bdevs_operational": 2, 00:31:50.299 "base_bdevs_list": [ 00:31:50.299 { 00:31:50.299 "name": null, 00:31:50.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.299 "is_configured": false, 00:31:50.299 "data_offset": 0, 00:31:50.299 "data_size": 65536 00:31:50.299 }, 00:31:50.299 { 00:31:50.299 "name": "BaseBdev2", 00:31:50.299 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:50.299 "is_configured": true, 00:31:50.299 "data_offset": 0, 00:31:50.299 "data_size": 65536 00:31:50.299 }, 00:31:50.299 { 00:31:50.299 "name": "BaseBdev3", 00:31:50.299 "uuid": 
"a8350288-b804-5c67-8a13-50a668d2598a", 00:31:50.299 "is_configured": true, 00:31:50.299 "data_offset": 0, 00:31:50.299 "data_size": 65536 00:31:50.299 } 00:31:50.299 ] 00:31:50.299 }' 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.299 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.865 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:50.865 "name": "raid_bdev1", 00:31:50.865 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:50.865 "strip_size_kb": 64, 00:31:50.865 "state": "online", 00:31:50.866 "raid_level": "raid5f", 00:31:50.866 "superblock": false, 00:31:50.866 "num_base_bdevs": 3, 00:31:50.866 "num_base_bdevs_discovered": 2, 00:31:50.866 "num_base_bdevs_operational": 2, 00:31:50.866 "base_bdevs_list": [ 00:31:50.866 { 00:31:50.866 
"name": null, 00:31:50.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.866 "is_configured": false, 00:31:50.866 "data_offset": 0, 00:31:50.866 "data_size": 65536 00:31:50.866 }, 00:31:50.866 { 00:31:50.866 "name": "BaseBdev2", 00:31:50.866 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:50.866 "is_configured": true, 00:31:50.866 "data_offset": 0, 00:31:50.866 "data_size": 65536 00:31:50.866 }, 00:31:50.866 { 00:31:50.866 "name": "BaseBdev3", 00:31:50.866 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:50.866 "is_configured": true, 00:31:50.866 "data_offset": 0, 00:31:50.866 "data_size": 65536 00:31:50.866 } 00:31:50.866 ] 00:31:50.866 }' 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.866 [2024-10-28 13:43:04.941602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:50.866 [2024-10-28 13:43:04.948183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.866 13:43:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:50.866 [2024-10-28 13:43:04.951023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.801 13:43:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.060 13:43:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.060 13:43:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:52.060 "name": "raid_bdev1", 00:31:52.060 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:52.060 "strip_size_kb": 64, 00:31:52.060 "state": "online", 00:31:52.060 "raid_level": "raid5f", 00:31:52.060 "superblock": false, 00:31:52.060 "num_base_bdevs": 3, 00:31:52.060 "num_base_bdevs_discovered": 3, 00:31:52.060 "num_base_bdevs_operational": 3, 00:31:52.060 "process": { 00:31:52.060 "type": "rebuild", 00:31:52.060 "target": "spare", 00:31:52.060 "progress": { 00:31:52.060 "blocks": 18432, 00:31:52.060 "percent": 14 00:31:52.060 } 00:31:52.060 }, 00:31:52.060 "base_bdevs_list": [ 00:31:52.060 { 00:31:52.060 "name": "spare", 00:31:52.060 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 
00:31:52.060 "data_size": 65536 00:31:52.060 }, 00:31:52.060 { 00:31:52.060 "name": "BaseBdev2", 00:31:52.060 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 00:31:52.060 "data_size": 65536 00:31:52.060 }, 00:31:52.060 { 00:31:52.060 "name": "BaseBdev3", 00:31:52.060 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 00:31:52.060 "data_size": 65536 00:31:52.060 } 00:31:52.060 ] 00:31:52.060 }' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=520 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:52.060 13:43:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.060 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:52.060 "name": "raid_bdev1", 00:31:52.060 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:52.060 "strip_size_kb": 64, 00:31:52.060 "state": "online", 00:31:52.060 "raid_level": "raid5f", 00:31:52.060 "superblock": false, 00:31:52.060 "num_base_bdevs": 3, 00:31:52.060 "num_base_bdevs_discovered": 3, 00:31:52.060 "num_base_bdevs_operational": 3, 00:31:52.060 "process": { 00:31:52.060 "type": "rebuild", 00:31:52.060 "target": "spare", 00:31:52.060 "progress": { 00:31:52.060 "blocks": 22528, 00:31:52.060 "percent": 17 00:31:52.060 } 00:31:52.060 }, 00:31:52.060 "base_bdevs_list": [ 00:31:52.060 { 00:31:52.060 "name": "spare", 00:31:52.060 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 00:31:52.060 "data_size": 65536 00:31:52.060 }, 00:31:52.060 { 00:31:52.060 "name": "BaseBdev2", 00:31:52.060 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 00:31:52.060 "data_size": 65536 00:31:52.060 }, 00:31:52.060 { 00:31:52.060 "name": "BaseBdev3", 00:31:52.060 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:52.060 "is_configured": true, 00:31:52.060 "data_offset": 0, 00:31:52.060 "data_size": 65536 00:31:52.061 } 
00:31:52.061 ] 00:31:52.061 }' 00:31:52.061 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:52.318 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:52.318 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:52.319 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:52.319 13:43:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:53.254 "name": "raid_bdev1", 00:31:53.254 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:53.254 
"strip_size_kb": 64, 00:31:53.254 "state": "online", 00:31:53.254 "raid_level": "raid5f", 00:31:53.254 "superblock": false, 00:31:53.254 "num_base_bdevs": 3, 00:31:53.254 "num_base_bdevs_discovered": 3, 00:31:53.254 "num_base_bdevs_operational": 3, 00:31:53.254 "process": { 00:31:53.254 "type": "rebuild", 00:31:53.254 "target": "spare", 00:31:53.254 "progress": { 00:31:53.254 "blocks": 47104, 00:31:53.254 "percent": 35 00:31:53.254 } 00:31:53.254 }, 00:31:53.254 "base_bdevs_list": [ 00:31:53.254 { 00:31:53.254 "name": "spare", 00:31:53.254 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:53.254 "is_configured": true, 00:31:53.254 "data_offset": 0, 00:31:53.254 "data_size": 65536 00:31:53.254 }, 00:31:53.254 { 00:31:53.254 "name": "BaseBdev2", 00:31:53.254 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:53.254 "is_configured": true, 00:31:53.254 "data_offset": 0, 00:31:53.254 "data_size": 65536 00:31:53.254 }, 00:31:53.254 { 00:31:53.254 "name": "BaseBdev3", 00:31:53.254 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:53.254 "is_configured": true, 00:31:53.254 "data_offset": 0, 00:31:53.254 "data_size": 65536 00:31:53.254 } 00:31:53.254 ] 00:31:53.254 }' 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:53.254 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:53.512 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.512 13:43:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:54.445 13:43:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:54.445 "name": "raid_bdev1", 00:31:54.445 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:54.445 "strip_size_kb": 64, 00:31:54.445 "state": "online", 00:31:54.445 "raid_level": "raid5f", 00:31:54.445 "superblock": false, 00:31:54.445 "num_base_bdevs": 3, 00:31:54.445 "num_base_bdevs_discovered": 3, 00:31:54.445 "num_base_bdevs_operational": 3, 00:31:54.445 "process": { 00:31:54.445 "type": "rebuild", 00:31:54.445 "target": "spare", 00:31:54.445 "progress": { 00:31:54.445 "blocks": 69632, 00:31:54.445 "percent": 53 00:31:54.445 } 00:31:54.445 }, 00:31:54.445 "base_bdevs_list": [ 00:31:54.445 { 00:31:54.445 "name": "spare", 00:31:54.445 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:54.445 "is_configured": true, 00:31:54.445 "data_offset": 0, 00:31:54.445 "data_size": 65536 00:31:54.445 }, 00:31:54.445 { 00:31:54.445 "name": "BaseBdev2", 00:31:54.445 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:54.445 
"is_configured": true, 00:31:54.445 "data_offset": 0, 00:31:54.445 "data_size": 65536 00:31:54.445 }, 00:31:54.445 { 00:31:54.445 "name": "BaseBdev3", 00:31:54.445 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:54.445 "is_configured": true, 00:31:54.445 "data_offset": 0, 00:31:54.445 "data_size": 65536 00:31:54.445 } 00:31:54.445 ] 00:31:54.445 }' 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:54.445 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:54.706 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:54.706 13:43:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.667 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:55.667 "name": "raid_bdev1", 00:31:55.667 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:55.667 "strip_size_kb": 64, 00:31:55.667 "state": "online", 00:31:55.667 "raid_level": "raid5f", 00:31:55.667 "superblock": false, 00:31:55.667 "num_base_bdevs": 3, 00:31:55.667 "num_base_bdevs_discovered": 3, 00:31:55.667 "num_base_bdevs_operational": 3, 00:31:55.667 "process": { 00:31:55.667 "type": "rebuild", 00:31:55.667 "target": "spare", 00:31:55.667 "progress": { 00:31:55.667 "blocks": 94208, 00:31:55.667 "percent": 71 00:31:55.667 } 00:31:55.667 }, 00:31:55.667 "base_bdevs_list": [ 00:31:55.667 { 00:31:55.668 "name": "spare", 00:31:55.668 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:55.668 "is_configured": true, 00:31:55.668 "data_offset": 0, 00:31:55.668 "data_size": 65536 00:31:55.668 }, 00:31:55.668 { 00:31:55.668 "name": "BaseBdev2", 00:31:55.668 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:55.668 "is_configured": true, 00:31:55.668 "data_offset": 0, 00:31:55.668 "data_size": 65536 00:31:55.668 }, 00:31:55.668 { 00:31:55.668 "name": "BaseBdev3", 00:31:55.668 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:55.668 "is_configured": true, 00:31:55.668 "data_offset": 0, 00:31:55.668 "data_size": 65536 00:31:55.668 } 00:31:55.668 ] 00:31:55.668 }' 00:31:55.668 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:55.668 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:55.668 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:55.668 13:43:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:55.668 13:43:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:57.039 "name": "raid_bdev1", 00:31:57.039 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:57.039 "strip_size_kb": 64, 00:31:57.039 "state": "online", 00:31:57.039 "raid_level": "raid5f", 00:31:57.039 "superblock": false, 00:31:57.039 "num_base_bdevs": 3, 00:31:57.039 "num_base_bdevs_discovered": 3, 00:31:57.039 "num_base_bdevs_operational": 3, 00:31:57.039 "process": { 00:31:57.039 "type": "rebuild", 00:31:57.039 "target": "spare", 00:31:57.039 "progress": { 00:31:57.039 "blocks": 116736, 00:31:57.039 "percent": 89 00:31:57.039 } 00:31:57.039 }, 00:31:57.039 "base_bdevs_list": [ 00:31:57.039 { 
00:31:57.039 "name": "spare", 00:31:57.039 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:57.039 "is_configured": true, 00:31:57.039 "data_offset": 0, 00:31:57.039 "data_size": 65536 00:31:57.039 }, 00:31:57.039 { 00:31:57.039 "name": "BaseBdev2", 00:31:57.039 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:57.039 "is_configured": true, 00:31:57.039 "data_offset": 0, 00:31:57.039 "data_size": 65536 00:31:57.039 }, 00:31:57.039 { 00:31:57.039 "name": "BaseBdev3", 00:31:57.039 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:57.039 "is_configured": true, 00:31:57.039 "data_offset": 0, 00:31:57.039 "data_size": 65536 00:31:57.039 } 00:31:57.039 ] 00:31:57.039 }' 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.039 13:43:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:57.296 [2024-10-28 13:43:11.419557] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:57.296 [2024-10-28 13:43:11.419674] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:57.296 [2024-10-28 13:43:11.419758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:57.862 13:43:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.862 13:43:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.862 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:57.862 "name": "raid_bdev1", 00:31:57.862 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:57.862 "strip_size_kb": 64, 00:31:57.862 "state": "online", 00:31:57.862 "raid_level": "raid5f", 00:31:57.862 "superblock": false, 00:31:57.862 "num_base_bdevs": 3, 00:31:57.862 "num_base_bdevs_discovered": 3, 00:31:57.862 "num_base_bdevs_operational": 3, 00:31:57.862 "base_bdevs_list": [ 00:31:57.862 { 00:31:57.862 "name": "spare", 00:31:57.862 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:57.862 "is_configured": true, 00:31:57.862 "data_offset": 0, 00:31:57.862 "data_size": 65536 00:31:57.862 }, 00:31:57.862 { 00:31:57.862 "name": "BaseBdev2", 00:31:57.862 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:57.862 "is_configured": true, 00:31:57.862 "data_offset": 0, 00:31:57.862 "data_size": 65536 00:31:57.862 }, 00:31:57.862 { 00:31:57.862 "name": "BaseBdev3", 00:31:57.862 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:57.862 "is_configured": true, 00:31:57.862 "data_offset": 0, 00:31:57.862 "data_size": 65536 00:31:57.862 } 
00:31:57.862 ] 00:31:57.862 }' 00:31:57.862 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:58.120 "name": "raid_bdev1", 00:31:58.120 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:58.120 "strip_size_kb": 64, 00:31:58.120 "state": "online", 00:31:58.120 "raid_level": "raid5f", 00:31:58.120 "superblock": false, 
00:31:58.120 "num_base_bdevs": 3, 00:31:58.120 "num_base_bdevs_discovered": 3, 00:31:58.120 "num_base_bdevs_operational": 3, 00:31:58.120 "base_bdevs_list": [ 00:31:58.120 { 00:31:58.120 "name": "spare", 00:31:58.120 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:58.120 "is_configured": true, 00:31:58.120 "data_offset": 0, 00:31:58.120 "data_size": 65536 00:31:58.120 }, 00:31:58.120 { 00:31:58.120 "name": "BaseBdev2", 00:31:58.120 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:58.120 "is_configured": true, 00:31:58.120 "data_offset": 0, 00:31:58.120 "data_size": 65536 00:31:58.120 }, 00:31:58.120 { 00:31:58.120 "name": "BaseBdev3", 00:31:58.120 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 00:31:58.120 "is_configured": true, 00:31:58.120 "data_offset": 0, 00:31:58.120 "data_size": 65536 00:31:58.120 } 00:31:58.120 ] 00:31:58.120 }' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:58.120 
13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.120 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.121 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.121 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.121 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.379 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.379 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.379 "name": "raid_bdev1", 00:31:58.379 "uuid": "2b438b75-4ce4-423d-a5ca-b99f4afbbb1a", 00:31:58.379 "strip_size_kb": 64, 00:31:58.379 "state": "online", 00:31:58.379 "raid_level": "raid5f", 00:31:58.379 "superblock": false, 00:31:58.379 "num_base_bdevs": 3, 00:31:58.379 "num_base_bdevs_discovered": 3, 00:31:58.379 "num_base_bdevs_operational": 3, 00:31:58.379 "base_bdevs_list": [ 00:31:58.379 { 00:31:58.379 "name": "spare", 00:31:58.379 "uuid": "57285df2-4669-55ec-9be1-72e67bb41684", 00:31:58.379 "is_configured": true, 00:31:58.379 "data_offset": 0, 00:31:58.379 "data_size": 65536 00:31:58.379 }, 00:31:58.379 { 00:31:58.379 "name": "BaseBdev2", 00:31:58.379 "uuid": "f6d718d0-27c8-5606-a9ca-59f15ac5624f", 00:31:58.379 "is_configured": true, 00:31:58.379 "data_offset": 0, 00:31:58.379 "data_size": 65536 00:31:58.379 }, 00:31:58.379 { 00:31:58.379 "name": "BaseBdev3", 00:31:58.379 "uuid": "a8350288-b804-5c67-8a13-50a668d2598a", 
00:31:58.379 "is_configured": true, 00:31:58.379 "data_offset": 0, 00:31:58.379 "data_size": 65536 00:31:58.379 } 00:31:58.379 ] 00:31:58.379 }' 00:31:58.379 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.379 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.945 [2024-10-28 13:43:12.819027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:58.945 [2024-10-28 13:43:12.819078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:58.945 [2024-10-28 13:43:12.819232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:58.945 [2024-10-28 13:43:12.819358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:58.945 [2024-10-28 13:43:12.819391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:58.945 13:43:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:59.203 /dev/nbd0 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.204 1+0 records in 00:31:59.204 1+0 records out 00:31:59.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345615 s, 11.9 MB/s 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:59.204 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:59.462 /dev/nbd1 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:59.462 13:43:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.462 1+0 records in 00:31:59.462 1+0 records out 00:31:59.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322042 s, 12.7 MB/s 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:59.462 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:00.028 13:43:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:00.028 13:43:14 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:00.028 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:00.028 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94326 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94326 ']' 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94326 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:00.029 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94326 00:32:00.287 killing process with pid 94326 00:32:00.287 Received shutdown signal, test time was about 60.000000 seconds 00:32:00.287 00:32:00.287 Latency(us) 00:32:00.287 [2024-10-28T13:43:14.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.287 [2024-10-28T13:43:14.447Z] =================================================================================================================== 00:32:00.287 [2024-10-28T13:43:14.447Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:32:00.287 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:00.287 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:00.287 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94326' 00:32:00.287 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94326 00:32:00.287 [2024-10-28 13:43:14.207727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:00.287 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94326 00:32:00.287 [2024-10-28 13:43:14.248284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:32:00.545 00:32:00.545 real 0m14.968s 00:32:00.545 user 0m19.638s 00:32:00.545 sys 0m1.923s 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:00.545 ************************************ 00:32:00.545 END TEST raid5f_rebuild_test 00:32:00.545 ************************************ 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.545 13:43:14 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:32:00.545 13:43:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:00.545 13:43:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:00.545 13:43:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:00.545 ************************************ 00:32:00.545 START TEST raid5f_rebuild_test_sb 00:32:00.545 ************************************ 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 
00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:32:00.545 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=94761 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 94761 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94761 ']' 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:32:00.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.546 13:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.546 [2024-10-28 13:43:14.686535] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:32:00.546 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:00.546 Zero copy mechanism will not be used. 00:32:00.546 [2024-10-28 13:43:14.686763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94761 ] 00:32:00.804 [2024-10-28 13:43:14.840765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:00.804 [2024-10-28 13:43:14.871209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.804 [2024-10-28 13:43:14.927304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.062 [2024-10-28 13:43:14.989683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.062 [2024-10-28 13:43:14.989743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 BaseBdev1_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 [2024-10-28 13:43:15.698296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:01.629 [2024-10-28 13:43:15.698380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.629 [2024-10-28 13:43:15.698420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:01.629 
[2024-10-28 13:43:15.698449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.629 [2024-10-28 13:43:15.701575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.629 [2024-10-28 13:43:15.701633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:01.629 BaseBdev1 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 BaseBdev2_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 [2024-10-28 13:43:15.731125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:01.629 [2024-10-28 13:43:15.731249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.629 [2024-10-28 13:43:15.731276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:01.629 [2024-10-28 13:43:15.731305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.629 [2024-10-28 13:43:15.734242] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.629 [2024-10-28 13:43:15.734303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:01.629 BaseBdev2 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 BaseBdev3_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.629 [2024-10-28 13:43:15.759161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:01.629 [2024-10-28 13:43:15.759252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.629 [2024-10-28 13:43:15.759298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:01.629 [2024-10-28 13:43:15.759317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.629 [2024-10-28 13:43:15.762506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.629 [2024-10-28 13:43:15.762584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:32:01.629 BaseBdev3 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.629 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 spare_malloc 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 spare_delay 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 [2024-10-28 13:43:15.808598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:01.888 [2024-10-28 13:43:15.808674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:01.888 [2024-10-28 13:43:15.808700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:32:01.888 [2024-10-28 13:43:15.808732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:01.888 [2024-10-28 13:43:15.811640] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:01.888 [2024-10-28 13:43:15.811693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:01.888 spare 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 [2024-10-28 13:43:15.820717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:01.888 [2024-10-28 13:43:15.823189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:01.888 [2024-10-28 13:43:15.823301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:01.888 [2024-10-28 13:43:15.823572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:32:01.888 [2024-10-28 13:43:15.823607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:01.888 [2024-10-28 13:43:15.823937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:32:01.888 [2024-10-28 13:43:15.824558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:32:01.888 [2024-10-28 13:43:15.824605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:32:01.888 [2024-10-28 13:43:15.824844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.888 13:43:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.888 "name": "raid_bdev1", 00:32:01.888 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:01.888 "strip_size_kb": 64, 00:32:01.888 "state": "online", 00:32:01.888 "raid_level": "raid5f", 00:32:01.888 "superblock": true, 
00:32:01.888 "num_base_bdevs": 3, 00:32:01.888 "num_base_bdevs_discovered": 3, 00:32:01.888 "num_base_bdevs_operational": 3, 00:32:01.888 "base_bdevs_list": [ 00:32:01.888 { 00:32:01.888 "name": "BaseBdev1", 00:32:01.888 "uuid": "7ae13bcf-7e44-5f47-95a5-8b7bbe77b81c", 00:32:01.888 "is_configured": true, 00:32:01.888 "data_offset": 2048, 00:32:01.888 "data_size": 63488 00:32:01.888 }, 00:32:01.888 { 00:32:01.888 "name": "BaseBdev2", 00:32:01.888 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:01.888 "is_configured": true, 00:32:01.888 "data_offset": 2048, 00:32:01.888 "data_size": 63488 00:32:01.888 }, 00:32:01.888 { 00:32:01.888 "name": "BaseBdev3", 00:32:01.888 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:01.888 "is_configured": true, 00:32:01.888 "data_offset": 2048, 00:32:01.888 "data_size": 63488 00:32:01.888 } 00:32:01.888 ] 00:32:01.888 }' 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.888 13:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.471 [2024-10-28 13:43:16.341470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.471 13:43:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:02.471 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:32:02.730 [2024-10-28 13:43:16.725321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:32:02.730 /dev/nbd0 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:02.730 1+0 records in 00:32:02.730 1+0 records out 00:32:02.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270082 s, 15.2 MB/s 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:32:02.730 13:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:32:03.297 496+0 records in 00:32:03.297 496+0 records out 00:32:03.297 65011712 bytes (65 MB, 62 MiB) copied, 0.416373 s, 156 MB/s 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.297 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:03.555 [2024-10-28 13:43:17.500744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.555 [2024-10-28 13:43:17.508841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:03.555 13:43:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.555 "name": "raid_bdev1", 00:32:03.555 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:03.555 "strip_size_kb": 64, 00:32:03.555 "state": "online", 00:32:03.555 "raid_level": "raid5f", 00:32:03.555 "superblock": true, 00:32:03.555 "num_base_bdevs": 3, 00:32:03.555 "num_base_bdevs_discovered": 2, 00:32:03.555 "num_base_bdevs_operational": 2, 00:32:03.555 "base_bdevs_list": [ 00:32:03.555 { 00:32:03.555 "name": null, 00:32:03.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.555 "is_configured": false, 00:32:03.555 "data_offset": 0, 00:32:03.555 "data_size": 63488 00:32:03.555 }, 00:32:03.555 { 00:32:03.555 "name": "BaseBdev2", 00:32:03.555 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:03.555 "is_configured": true, 00:32:03.555 "data_offset": 2048, 00:32:03.555 "data_size": 63488 00:32:03.555 }, 00:32:03.555 { 00:32:03.555 "name": "BaseBdev3", 00:32:03.555 "uuid": 
"76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:03.555 "is_configured": true, 00:32:03.555 "data_offset": 2048, 00:32:03.555 "data_size": 63488 00:32:03.555 } 00:32:03.555 ] 00:32:03.555 }' 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.555 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:04.121 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.121 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 [2024-10-28 13:43:17.989274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:04.121 [2024-10-28 13:43:17.995994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390 00:32:04.121 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.121 13:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:04.121 [2024-10-28 13:43:17.998954] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:05.056 13:43:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.056 "name": "raid_bdev1", 00:32:05.056 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:05.056 "strip_size_kb": 64, 00:32:05.056 "state": "online", 00:32:05.056 "raid_level": "raid5f", 00:32:05.056 "superblock": true, 00:32:05.056 "num_base_bdevs": 3, 00:32:05.056 "num_base_bdevs_discovered": 3, 00:32:05.056 "num_base_bdevs_operational": 3, 00:32:05.056 "process": { 00:32:05.056 "type": "rebuild", 00:32:05.056 "target": "spare", 00:32:05.056 "progress": { 00:32:05.056 "blocks": 20480, 00:32:05.056 "percent": 16 00:32:05.056 } 00:32:05.056 }, 00:32:05.056 "base_bdevs_list": [ 00:32:05.056 { 00:32:05.056 "name": "spare", 00:32:05.056 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:05.056 "is_configured": true, 00:32:05.056 "data_offset": 2048, 00:32:05.056 "data_size": 63488 00:32:05.056 }, 00:32:05.056 { 00:32:05.056 "name": "BaseBdev2", 00:32:05.056 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:05.056 "is_configured": true, 00:32:05.056 "data_offset": 2048, 00:32:05.056 "data_size": 63488 00:32:05.056 }, 00:32:05.056 { 00:32:05.056 "name": "BaseBdev3", 00:32:05.056 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:05.056 "is_configured": true, 00:32:05.056 "data_offset": 2048, 00:32:05.056 "data_size": 63488 00:32:05.056 } 00:32:05.056 ] 00:32:05.056 }' 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.056 13:43:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.056 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.056 [2024-10-28 13:43:19.168773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:05.056 [2024-10-28 13:43:19.212591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:05.056 [2024-10-28 13:43:19.212674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.056 [2024-10-28 13:43:19.212716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:05.056 [2024-10-28 13:43:19.212734] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:05.315 13:43:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.315 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.316 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:05.316 "name": "raid_bdev1", 00:32:05.316 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:05.316 "strip_size_kb": 64, 00:32:05.316 "state": "online", 00:32:05.316 "raid_level": "raid5f", 00:32:05.316 "superblock": true, 00:32:05.316 "num_base_bdevs": 3, 00:32:05.316 "num_base_bdevs_discovered": 2, 00:32:05.316 "num_base_bdevs_operational": 2, 00:32:05.316 "base_bdevs_list": [ 00:32:05.316 { 00:32:05.316 "name": null, 00:32:05.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.316 "is_configured": false, 00:32:05.316 "data_offset": 0, 00:32:05.316 "data_size": 63488 00:32:05.316 }, 00:32:05.316 { 00:32:05.316 "name": "BaseBdev2", 00:32:05.316 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:05.316 "is_configured": true, 00:32:05.316 "data_offset": 2048, 00:32:05.316 "data_size": 
63488 00:32:05.316 }, 00:32:05.316 { 00:32:05.316 "name": "BaseBdev3", 00:32:05.316 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:05.316 "is_configured": true, 00:32:05.316 "data_offset": 2048, 00:32:05.316 "data_size": 63488 00:32:05.316 } 00:32:05.316 ] 00:32:05.316 }' 00:32:05.316 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:05.316 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.882 "name": "raid_bdev1", 00:32:05.882 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:05.882 "strip_size_kb": 64, 00:32:05.882 "state": "online", 00:32:05.882 "raid_level": "raid5f", 00:32:05.882 "superblock": true, 00:32:05.882 "num_base_bdevs": 3, 00:32:05.882 
"num_base_bdevs_discovered": 2, 00:32:05.882 "num_base_bdevs_operational": 2, 00:32:05.882 "base_bdevs_list": [ 00:32:05.882 { 00:32:05.882 "name": null, 00:32:05.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.882 "is_configured": false, 00:32:05.882 "data_offset": 0, 00:32:05.882 "data_size": 63488 00:32:05.882 }, 00:32:05.882 { 00:32:05.882 "name": "BaseBdev2", 00:32:05.882 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:05.882 "is_configured": true, 00:32:05.882 "data_offset": 2048, 00:32:05.882 "data_size": 63488 00:32:05.882 }, 00:32:05.882 { 00:32:05.882 "name": "BaseBdev3", 00:32:05.882 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:05.882 "is_configured": true, 00:32:05.882 "data_offset": 2048, 00:32:05.882 "data_size": 63488 00:32:05.882 } 00:32:05.882 ] 00:32:05.882 }' 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:05.882 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.883 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:05.883 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:05.883 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.883 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.883 [2024-10-28 13:43:19.904627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:05.883 [2024-10-28 13:43:19.910868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460 00:32:05.883 13:43:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.883 13:43:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:05.883 [2024-10-28 13:43:19.913905] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.817 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:06.817 "name": "raid_bdev1", 00:32:06.817 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:06.817 "strip_size_kb": 64, 00:32:06.817 "state": "online", 00:32:06.817 "raid_level": "raid5f", 00:32:06.817 "superblock": true, 00:32:06.817 "num_base_bdevs": 3, 00:32:06.817 "num_base_bdevs_discovered": 3, 00:32:06.817 "num_base_bdevs_operational": 3, 00:32:06.818 "process": { 00:32:06.818 "type": "rebuild", 00:32:06.818 "target": "spare", 00:32:06.818 "progress": { 00:32:06.818 "blocks": 20480, 00:32:06.818 "percent": 16 00:32:06.818 } 
00:32:06.818 }, 00:32:06.818 "base_bdevs_list": [ 00:32:06.818 { 00:32:06.818 "name": "spare", 00:32:06.818 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:06.818 "is_configured": true, 00:32:06.818 "data_offset": 2048, 00:32:06.818 "data_size": 63488 00:32:06.818 }, 00:32:06.818 { 00:32:06.818 "name": "BaseBdev2", 00:32:06.818 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:06.818 "is_configured": true, 00:32:06.818 "data_offset": 2048, 00:32:06.818 "data_size": 63488 00:32:06.818 }, 00:32:06.818 { 00:32:06.818 "name": "BaseBdev3", 00:32:06.818 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:06.818 "is_configured": true, 00:32:06.818 "data_offset": 2048, 00:32:06.818 "data_size": 63488 00:32:06.818 } 00:32:06.818 ] 00:32:06.818 }' 00:32:07.076 13:43:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:07.076 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:07.076 13:43:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:07.076 "name": "raid_bdev1", 00:32:07.076 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:07.076 "strip_size_kb": 64, 00:32:07.076 "state": "online", 00:32:07.076 "raid_level": "raid5f", 00:32:07.076 "superblock": true, 00:32:07.076 "num_base_bdevs": 3, 00:32:07.076 "num_base_bdevs_discovered": 3, 00:32:07.076 "num_base_bdevs_operational": 3, 00:32:07.076 "process": { 00:32:07.076 "type": "rebuild", 00:32:07.076 "target": "spare", 00:32:07.076 "progress": { 00:32:07.076 "blocks": 22528, 00:32:07.076 "percent": 17 00:32:07.076 } 00:32:07.076 }, 00:32:07.076 "base_bdevs_list": [ 00:32:07.076 { 00:32:07.076 "name": "spare", 00:32:07.076 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:07.076 "is_configured": true, 00:32:07.076 "data_offset": 2048, 00:32:07.076 
"data_size": 63488 00:32:07.076 }, 00:32:07.076 { 00:32:07.076 "name": "BaseBdev2", 00:32:07.076 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:07.076 "is_configured": true, 00:32:07.076 "data_offset": 2048, 00:32:07.076 "data_size": 63488 00:32:07.076 }, 00:32:07.076 { 00:32:07.076 "name": "BaseBdev3", 00:32:07.076 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:07.076 "is_configured": true, 00:32:07.076 "data_offset": 2048, 00:32:07.076 "data_size": 63488 00:32:07.076 } 00:32:07.076 ] 00:32:07.076 }' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:07.076 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.334 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.334 13:43:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.270 
13:43:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.270 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:08.270 "name": "raid_bdev1", 00:32:08.270 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:08.270 "strip_size_kb": 64, 00:32:08.270 "state": "online", 00:32:08.271 "raid_level": "raid5f", 00:32:08.271 "superblock": true, 00:32:08.271 "num_base_bdevs": 3, 00:32:08.271 "num_base_bdevs_discovered": 3, 00:32:08.271 "num_base_bdevs_operational": 3, 00:32:08.271 "process": { 00:32:08.271 "type": "rebuild", 00:32:08.271 "target": "spare", 00:32:08.271 "progress": { 00:32:08.271 "blocks": 47104, 00:32:08.271 "percent": 37 00:32:08.271 } 00:32:08.271 }, 00:32:08.271 "base_bdevs_list": [ 00:32:08.271 { 00:32:08.271 "name": "spare", 00:32:08.271 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:08.271 "is_configured": true, 00:32:08.271 "data_offset": 2048, 00:32:08.271 "data_size": 63488 00:32:08.271 }, 00:32:08.271 { 00:32:08.271 "name": "BaseBdev2", 00:32:08.271 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:08.271 "is_configured": true, 00:32:08.271 "data_offset": 2048, 00:32:08.271 "data_size": 63488 00:32:08.271 }, 00:32:08.271 { 00:32:08.271 "name": "BaseBdev3", 00:32:08.271 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:08.271 "is_configured": true, 00:32:08.271 "data_offset": 2048, 00:32:08.271 "data_size": 63488 00:32:08.271 } 00:32:08.271 ] 00:32:08.271 }' 00:32:08.271 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:08.271 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:08.271 13:43:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:08.271 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:08.271 13:43:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.648 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:09.649 "name": "raid_bdev1", 00:32:09.649 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:09.649 "strip_size_kb": 64, 00:32:09.649 "state": "online", 00:32:09.649 "raid_level": "raid5f", 00:32:09.649 "superblock": true, 00:32:09.649 "num_base_bdevs": 3, 00:32:09.649 "num_base_bdevs_discovered": 3, 00:32:09.649 "num_base_bdevs_operational": 
3, 00:32:09.649 "process": { 00:32:09.649 "type": "rebuild", 00:32:09.649 "target": "spare", 00:32:09.649 "progress": { 00:32:09.649 "blocks": 69632, 00:32:09.649 "percent": 54 00:32:09.649 } 00:32:09.649 }, 00:32:09.649 "base_bdevs_list": [ 00:32:09.649 { 00:32:09.649 "name": "spare", 00:32:09.649 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:09.649 "is_configured": true, 00:32:09.649 "data_offset": 2048, 00:32:09.649 "data_size": 63488 00:32:09.649 }, 00:32:09.649 { 00:32:09.649 "name": "BaseBdev2", 00:32:09.649 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:09.649 "is_configured": true, 00:32:09.649 "data_offset": 2048, 00:32:09.649 "data_size": 63488 00:32:09.649 }, 00:32:09.649 { 00:32:09.649 "name": "BaseBdev3", 00:32:09.649 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:09.649 "is_configured": true, 00:32:09.649 "data_offset": 2048, 00:32:09.649 "data_size": 63488 00:32:09.649 } 00:32:09.649 ] 00:32:09.649 }' 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:09.649 13:43:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:10.585 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:10.585 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:10.585 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:10.585 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:10.585 
13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:10.586 "name": "raid_bdev1", 00:32:10.586 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:10.586 "strip_size_kb": 64, 00:32:10.586 "state": "online", 00:32:10.586 "raid_level": "raid5f", 00:32:10.586 "superblock": true, 00:32:10.586 "num_base_bdevs": 3, 00:32:10.586 "num_base_bdevs_discovered": 3, 00:32:10.586 "num_base_bdevs_operational": 3, 00:32:10.586 "process": { 00:32:10.586 "type": "rebuild", 00:32:10.586 "target": "spare", 00:32:10.586 "progress": { 00:32:10.586 "blocks": 94208, 00:32:10.586 "percent": 74 00:32:10.586 } 00:32:10.586 }, 00:32:10.586 "base_bdevs_list": [ 00:32:10.586 { 00:32:10.586 "name": "spare", 00:32:10.586 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:10.586 "is_configured": true, 00:32:10.586 "data_offset": 2048, 00:32:10.586 "data_size": 63488 00:32:10.586 }, 00:32:10.586 { 00:32:10.586 "name": "BaseBdev2", 00:32:10.586 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:10.586 "is_configured": true, 00:32:10.586 "data_offset": 2048, 00:32:10.586 "data_size": 63488 00:32:10.586 }, 00:32:10.586 { 00:32:10.586 "name": "BaseBdev3", 00:32:10.586 "uuid": 
"76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:10.586 "is_configured": true, 00:32:10.586 "data_offset": 2048, 00:32:10.586 "data_size": 63488 00:32:10.586 } 00:32:10.586 ] 00:32:10.586 }' 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:10.586 13:43:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.968 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.968 
13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.968 "name": "raid_bdev1", 00:32:11.968 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:11.968 "strip_size_kb": 64, 00:32:11.968 "state": "online", 00:32:11.968 "raid_level": "raid5f", 00:32:11.968 "superblock": true, 00:32:11.968 "num_base_bdevs": 3, 00:32:11.968 "num_base_bdevs_discovered": 3, 00:32:11.968 "num_base_bdevs_operational": 3, 00:32:11.968 "process": { 00:32:11.968 "type": "rebuild", 00:32:11.968 "target": "spare", 00:32:11.968 "progress": { 00:32:11.968 "blocks": 116736, 00:32:11.968 "percent": 91 00:32:11.968 } 00:32:11.968 }, 00:32:11.968 "base_bdevs_list": [ 00:32:11.968 { 00:32:11.968 "name": "spare", 00:32:11.968 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:11.968 "is_configured": true, 00:32:11.968 "data_offset": 2048, 00:32:11.968 "data_size": 63488 00:32:11.968 }, 00:32:11.968 { 00:32:11.968 "name": "BaseBdev2", 00:32:11.968 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:11.968 "is_configured": true, 00:32:11.968 "data_offset": 2048, 00:32:11.968 "data_size": 63488 00:32:11.968 }, 00:32:11.968 { 00:32:11.968 "name": "BaseBdev3", 00:32:11.969 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:11.969 "is_configured": true, 00:32:11.969 "data_offset": 2048, 00:32:11.969 "data_size": 63488 00:32:11.969 } 00:32:11.969 ] 00:32:11.969 }' 00:32:11.969 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.969 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:11.969 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.969 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:11.969 13:43:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:12.227 [2024-10-28 13:43:26.181711] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:12.227 [2024-10-28 13:43:26.181828] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:12.227 [2024-10-28 13:43:26.181986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.794 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.056 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:13.056 "name": "raid_bdev1", 00:32:13.056 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:13.056 "strip_size_kb": 64, 00:32:13.056 "state": "online", 00:32:13.056 "raid_level": "raid5f", 00:32:13.056 "superblock": true, 00:32:13.056 "num_base_bdevs": 3, 00:32:13.056 "num_base_bdevs_discovered": 3, 
00:32:13.056 "num_base_bdevs_operational": 3, 00:32:13.056 "base_bdevs_list": [ 00:32:13.056 { 00:32:13.056 "name": "spare", 00:32:13.056 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:13.056 "is_configured": true, 00:32:13.056 "data_offset": 2048, 00:32:13.056 "data_size": 63488 00:32:13.056 }, 00:32:13.056 { 00:32:13.056 "name": "BaseBdev2", 00:32:13.056 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:13.056 "is_configured": true, 00:32:13.056 "data_offset": 2048, 00:32:13.056 "data_size": 63488 00:32:13.056 }, 00:32:13.056 { 00:32:13.056 "name": "BaseBdev3", 00:32:13.056 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:13.056 "is_configured": true, 00:32:13.056 "data_offset": 2048, 00:32:13.056 "data_size": 63488 00:32:13.056 } 00:32:13.056 ] 00:32:13.056 }' 00:32:13.056 13:43:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.056 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.057 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.057 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:13.057 "name": "raid_bdev1", 00:32:13.057 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:13.057 "strip_size_kb": 64, 00:32:13.057 "state": "online", 00:32:13.057 "raid_level": "raid5f", 00:32:13.057 "superblock": true, 00:32:13.057 "num_base_bdevs": 3, 00:32:13.057 "num_base_bdevs_discovered": 3, 00:32:13.057 "num_base_bdevs_operational": 3, 00:32:13.057 "base_bdevs_list": [ 00:32:13.057 { 00:32:13.057 "name": "spare", 00:32:13.057 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:13.057 "is_configured": true, 00:32:13.057 "data_offset": 2048, 00:32:13.057 "data_size": 63488 00:32:13.057 }, 00:32:13.057 { 00:32:13.057 "name": "BaseBdev2", 00:32:13.057 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:13.057 "is_configured": true, 00:32:13.057 "data_offset": 2048, 00:32:13.057 "data_size": 63488 00:32:13.057 }, 00:32:13.057 { 00:32:13.057 "name": "BaseBdev3", 00:32:13.057 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:13.057 "is_configured": true, 00:32:13.057 "data_offset": 2048, 00:32:13.057 "data_size": 63488 00:32:13.057 } 00:32:13.057 ] 00:32:13.057 }' 00:32:13.057 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:13.057 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:13.057 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.322 "name": "raid_bdev1", 00:32:13.322 "uuid": 
"e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:13.322 "strip_size_kb": 64, 00:32:13.322 "state": "online", 00:32:13.322 "raid_level": "raid5f", 00:32:13.322 "superblock": true, 00:32:13.322 "num_base_bdevs": 3, 00:32:13.322 "num_base_bdevs_discovered": 3, 00:32:13.322 "num_base_bdevs_operational": 3, 00:32:13.322 "base_bdevs_list": [ 00:32:13.322 { 00:32:13.322 "name": "spare", 00:32:13.322 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:13.322 "is_configured": true, 00:32:13.322 "data_offset": 2048, 00:32:13.322 "data_size": 63488 00:32:13.322 }, 00:32:13.322 { 00:32:13.322 "name": "BaseBdev2", 00:32:13.322 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:13.322 "is_configured": true, 00:32:13.322 "data_offset": 2048, 00:32:13.322 "data_size": 63488 00:32:13.322 }, 00:32:13.322 { 00:32:13.322 "name": "BaseBdev3", 00:32:13.322 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:13.322 "is_configured": true, 00:32:13.322 "data_offset": 2048, 00:32:13.322 "data_size": 63488 00:32:13.322 } 00:32:13.322 ] 00:32:13.322 }' 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.322 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.890 [2024-10-28 13:43:27.781382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:13.890 [2024-10-28 13:43:27.781416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:13.890 [2024-10-28 13:43:27.781561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.890 [2024-10-28 13:43:27.781666] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.890 [2024-10-28 13:43:27.781687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:13.890 13:43:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:14.149 /dev/nbd0 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:14.149 1+0 records in 00:32:14.149 1+0 records out 00:32:14.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396217 s, 10.3 MB/s 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:14.149 13:43:28 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:14.149 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:14.408 /dev/nbd1 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:14.408 1+0 records in 00:32:14.408 1+0 records out 00:32:14.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376298 s, 10.9 MB/s 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:14.408 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:14.667 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:14.925 13:43:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:15.184 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.185 [2024-10-28 13:43:29.318769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:15.185 [2024-10-28 13:43:29.318871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:15.185 [2024-10-28 13:43:29.318904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:15.185 [2024-10-28 13:43:29.318923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:15.185 [2024-10-28 13:43:29.322048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:15.185 [2024-10-28 13:43:29.322262] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:15.185 [2024-10-28 13:43:29.322376] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:15.185 [2024-10-28 13:43:29.322441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:15.185 [2024-10-28 13:43:29.322593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:15.185 [2024-10-28 13:43:29.322732] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:15.185 spare 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.185 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 [2024-10-28 13:43:29.422915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:15.444 [2024-10-28 13:43:29.422953] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:15.444 [2024-10-28 13:43:29.423368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:32:15.444 [2024-10-28 13:43:29.423974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:15.444 [2024-10-28 13:43:29.423992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:15.444 [2024-10-28 13:43:29.424192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:15.444 "name": "raid_bdev1", 00:32:15.444 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:15.444 "strip_size_kb": 64, 00:32:15.444 "state": "online", 00:32:15.444 "raid_level": "raid5f", 00:32:15.444 "superblock": true, 00:32:15.444 "num_base_bdevs": 3, 00:32:15.444 "num_base_bdevs_discovered": 3, 00:32:15.444 "num_base_bdevs_operational": 3, 00:32:15.444 "base_bdevs_list": [ 00:32:15.444 { 00:32:15.444 "name": "spare", 00:32:15.444 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:15.444 "is_configured": true, 00:32:15.444 "data_offset": 2048, 00:32:15.444 "data_size": 63488 00:32:15.444 }, 00:32:15.444 { 00:32:15.444 "name": "BaseBdev2", 00:32:15.444 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:15.444 "is_configured": true, 00:32:15.444 "data_offset": 
2048, 00:32:15.444 "data_size": 63488 00:32:15.444 }, 00:32:15.444 { 00:32:15.444 "name": "BaseBdev3", 00:32:15.444 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:15.444 "is_configured": true, 00:32:15.444 "data_offset": 2048, 00:32:15.444 "data_size": 63488 00:32:15.444 } 00:32:15.444 ] 00:32:15.444 }' 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:15.444 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.009 13:43:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.009 "name": "raid_bdev1", 00:32:16.009 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:16.009 "strip_size_kb": 64, 00:32:16.009 "state": "online", 00:32:16.009 "raid_level": "raid5f", 00:32:16.009 "superblock": true, 00:32:16.009 
"num_base_bdevs": 3, 00:32:16.009 "num_base_bdevs_discovered": 3, 00:32:16.009 "num_base_bdevs_operational": 3, 00:32:16.009 "base_bdevs_list": [ 00:32:16.009 { 00:32:16.009 "name": "spare", 00:32:16.009 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:16.009 "is_configured": true, 00:32:16.009 "data_offset": 2048, 00:32:16.009 "data_size": 63488 00:32:16.009 }, 00:32:16.009 { 00:32:16.009 "name": "BaseBdev2", 00:32:16.009 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:16.009 "is_configured": true, 00:32:16.009 "data_offset": 2048, 00:32:16.009 "data_size": 63488 00:32:16.009 }, 00:32:16.009 { 00:32:16.009 "name": "BaseBdev3", 00:32:16.009 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:16.009 "is_configured": true, 00:32:16.009 "data_offset": 2048, 00:32:16.009 "data_size": 63488 00:32:16.009 } 00:32:16.009 ] 00:32:16.009 }' 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.009 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:16.268 13:43:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.268 [2024-10-28 13:43:30.191126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:16.268 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.269 "name": "raid_bdev1", 00:32:16.269 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:16.269 "strip_size_kb": 64, 00:32:16.269 "state": "online", 00:32:16.269 "raid_level": "raid5f", 00:32:16.269 "superblock": true, 00:32:16.269 "num_base_bdevs": 3, 00:32:16.269 "num_base_bdevs_discovered": 2, 00:32:16.269 "num_base_bdevs_operational": 2, 00:32:16.269 "base_bdevs_list": [ 00:32:16.269 { 00:32:16.269 "name": null, 00:32:16.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.269 "is_configured": false, 00:32:16.269 "data_offset": 0, 00:32:16.269 "data_size": 63488 00:32:16.269 }, 00:32:16.269 { 00:32:16.269 "name": "BaseBdev2", 00:32:16.269 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:16.269 "is_configured": true, 00:32:16.269 "data_offset": 2048, 00:32:16.269 "data_size": 63488 00:32:16.269 }, 00:32:16.269 { 00:32:16.269 "name": "BaseBdev3", 00:32:16.269 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:16.269 "is_configured": true, 00:32:16.269 "data_offset": 2048, 00:32:16.269 "data_size": 63488 00:32:16.269 } 00:32:16.269 ] 00:32:16.269 }' 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.269 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.836 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:16.836 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.836 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.836 [2024-10-28 13:43:30.691394] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:16.836 [2024-10-28 13:43:30.691649] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:16.836 [2024-10-28 13:43:30.691679] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:16.836 [2024-10-28 13:43:30.691735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:16.836 [2024-10-28 13:43:30.698390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0 00:32:16.836 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.836 13:43:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:16.836 [2024-10-28 13:43:30.701490] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:17.771 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:17.772 "name": "raid_bdev1", 00:32:17.772 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:17.772 "strip_size_kb": 64, 00:32:17.772 "state": "online", 00:32:17.772 "raid_level": "raid5f", 00:32:17.772 "superblock": true, 00:32:17.772 "num_base_bdevs": 3, 00:32:17.772 "num_base_bdevs_discovered": 3, 00:32:17.772 "num_base_bdevs_operational": 3, 00:32:17.772 "process": { 00:32:17.772 "type": "rebuild", 00:32:17.772 "target": "spare", 00:32:17.772 "progress": { 00:32:17.772 "blocks": 20480, 00:32:17.772 "percent": 16 00:32:17.772 } 00:32:17.772 }, 00:32:17.772 "base_bdevs_list": [ 00:32:17.772 { 00:32:17.772 "name": "spare", 00:32:17.772 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:17.772 "is_configured": true, 00:32:17.772 "data_offset": 2048, 00:32:17.772 "data_size": 63488 00:32:17.772 }, 00:32:17.772 { 00:32:17.772 "name": "BaseBdev2", 00:32:17.772 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:17.772 "is_configured": true, 00:32:17.772 "data_offset": 2048, 00:32:17.772 "data_size": 63488 00:32:17.772 }, 00:32:17.772 { 00:32:17.772 "name": "BaseBdev3", 00:32:17.772 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:17.772 "is_configured": true, 00:32:17.772 "data_offset": 2048, 00:32:17.772 "data_size": 63488 00:32:17.772 } 00:32:17.772 ] 00:32:17.772 }' 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:17.772 [2024-10-28 13:43:31.871127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:17.772 [2024-10-28 13:43:31.914687] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:17.772 [2024-10-28 13:43:31.914779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.772 [2024-10-28 13:43:31.914804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:17.772 [2024-10-28 13:43:31.914839] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:17.772 13:43:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:17.772 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.031 "name": "raid_bdev1", 00:32:18.031 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:18.031 "strip_size_kb": 64, 00:32:18.031 "state": "online", 00:32:18.031 "raid_level": "raid5f", 00:32:18.031 "superblock": true, 00:32:18.031 "num_base_bdevs": 3, 00:32:18.031 "num_base_bdevs_discovered": 2, 00:32:18.031 "num_base_bdevs_operational": 2, 00:32:18.031 "base_bdevs_list": [ 00:32:18.031 { 00:32:18.031 "name": null, 00:32:18.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.031 "is_configured": false, 00:32:18.031 "data_offset": 0, 00:32:18.031 "data_size": 63488 00:32:18.031 }, 00:32:18.031 { 00:32:18.031 "name": "BaseBdev2", 00:32:18.031 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:18.031 "is_configured": true, 00:32:18.031 "data_offset": 2048, 00:32:18.031 "data_size": 63488 00:32:18.031 }, 00:32:18.031 { 00:32:18.031 "name": "BaseBdev3", 00:32:18.031 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:18.031 "is_configured": true, 00:32:18.031 "data_offset": 2048, 00:32:18.031 "data_size": 63488 00:32:18.031 } 00:32:18.031 ] 00:32:18.031 }' 00:32:18.031 13:43:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.031 13:43:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.288 13:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:18.288 13:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.288 13:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:18.554 [2024-10-28 13:43:32.449475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:18.554 [2024-10-28 13:43:32.449713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:18.554 [2024-10-28 13:43:32.449907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:32:18.554 [2024-10-28 13:43:32.450057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:18.554 [2024-10-28 13:43:32.450747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:18.554 [2024-10-28 13:43:32.450909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:18.554 [2024-10-28 13:43:32.451195] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:18.554 [2024-10-28 13:43:32.451229] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:18.554 [2024-10-28 13:43:32.451245] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:18.554 [2024-10-28 13:43:32.451298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:18.554 [2024-10-28 13:43:32.457733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0 00:32:18.554 spare 00:32:18.554 13:43:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.554 13:43:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:18.554 [2024-10-28 13:43:32.460630] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.498 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:19.498 "name": "raid_bdev1", 00:32:19.498 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:19.498 "strip_size_kb": 64, 00:32:19.498 "state": 
"online", 00:32:19.498 "raid_level": "raid5f", 00:32:19.499 "superblock": true, 00:32:19.499 "num_base_bdevs": 3, 00:32:19.499 "num_base_bdevs_discovered": 3, 00:32:19.499 "num_base_bdevs_operational": 3, 00:32:19.499 "process": { 00:32:19.499 "type": "rebuild", 00:32:19.499 "target": "spare", 00:32:19.499 "progress": { 00:32:19.499 "blocks": 20480, 00:32:19.499 "percent": 16 00:32:19.499 } 00:32:19.499 }, 00:32:19.499 "base_bdevs_list": [ 00:32:19.499 { 00:32:19.499 "name": "spare", 00:32:19.499 "uuid": "9ec10b2b-c75c-558d-af6c-718108b8337e", 00:32:19.499 "is_configured": true, 00:32:19.499 "data_offset": 2048, 00:32:19.499 "data_size": 63488 00:32:19.499 }, 00:32:19.499 { 00:32:19.499 "name": "BaseBdev2", 00:32:19.499 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:19.499 "is_configured": true, 00:32:19.499 "data_offset": 2048, 00:32:19.499 "data_size": 63488 00:32:19.499 }, 00:32:19.499 { 00:32:19.499 "name": "BaseBdev3", 00:32:19.499 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:19.499 "is_configured": true, 00:32:19.499 "data_offset": 2048, 00:32:19.499 "data_size": 63488 00:32:19.499 } 00:32:19.499 ] 00:32:19.499 }' 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.499 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.499 [2024-10-28 13:43:33.622621] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.756 [2024-10-28 13:43:33.673543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:19.756 [2024-10-28 13:43:33.673802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.756 [2024-10-28 13:43:33.673951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:19.756 [2024-10-28 13:43:33.674002] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:19.756 "name": "raid_bdev1", 00:32:19.756 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:19.756 "strip_size_kb": 64, 00:32:19.756 "state": "online", 00:32:19.756 "raid_level": "raid5f", 00:32:19.756 "superblock": true, 00:32:19.756 "num_base_bdevs": 3, 00:32:19.756 "num_base_bdevs_discovered": 2, 00:32:19.756 "num_base_bdevs_operational": 2, 00:32:19.756 "base_bdevs_list": [ 00:32:19.756 { 00:32:19.756 "name": null, 00:32:19.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.756 "is_configured": false, 00:32:19.756 "data_offset": 0, 00:32:19.756 "data_size": 63488 00:32:19.756 }, 00:32:19.756 { 00:32:19.756 "name": "BaseBdev2", 00:32:19.756 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:19.756 "is_configured": true, 00:32:19.756 "data_offset": 2048, 00:32:19.756 "data_size": 63488 00:32:19.756 }, 00:32:19.756 { 00:32:19.756 "name": "BaseBdev3", 00:32:19.756 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:19.756 "is_configured": true, 00:32:19.756 "data_offset": 2048, 00:32:19.756 "data_size": 63488 00:32:19.756 } 00:32:19.756 ] 00:32:19.756 }' 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:19.756 13:43:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:20.323 "name": "raid_bdev1", 00:32:20.323 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:20.323 "strip_size_kb": 64, 00:32:20.323 "state": "online", 00:32:20.323 "raid_level": "raid5f", 00:32:20.323 "superblock": true, 00:32:20.323 "num_base_bdevs": 3, 00:32:20.323 "num_base_bdevs_discovered": 2, 00:32:20.323 "num_base_bdevs_operational": 2, 00:32:20.323 "base_bdevs_list": [ 00:32:20.323 { 00:32:20.323 "name": null, 00:32:20.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.323 "is_configured": false, 00:32:20.323 "data_offset": 0, 00:32:20.323 "data_size": 63488 00:32:20.323 }, 00:32:20.323 { 00:32:20.323 "name": "BaseBdev2", 00:32:20.323 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:20.323 "is_configured": true, 00:32:20.323 "data_offset": 2048, 00:32:20.323 "data_size": 63488 00:32:20.323 }, 00:32:20.323 { 00:32:20.323 "name": "BaseBdev3", 00:32:20.323 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:20.323 "is_configured": true, 
00:32:20.323 "data_offset": 2048, 00:32:20.323 "data_size": 63488 00:32:20.323 } 00:32:20.323 ] 00:32:20.323 }' 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:20.323 [2024-10-28 13:43:34.392999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:20.323 [2024-10-28 13:43:34.393063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.323 [2024-10-28 13:43:34.393095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:32:20.323 [2024-10-28 13:43:34.393110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.323 [2024-10-28 13:43:34.393682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.323 [2024-10-28 
13:43:34.393724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:20.323 [2024-10-28 13:43:34.393835] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:20.323 [2024-10-28 13:43:34.393872] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:20.323 [2024-10-28 13:43:34.393885] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:20.323 [2024-10-28 13:43:34.393897] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:20.323 BaseBdev1 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.323 13:43:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.259 13:43:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.259 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.518 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.518 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.518 "name": "raid_bdev1", 00:32:21.518 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:21.518 "strip_size_kb": 64, 00:32:21.518 "state": "online", 00:32:21.518 "raid_level": "raid5f", 00:32:21.518 "superblock": true, 00:32:21.518 "num_base_bdevs": 3, 00:32:21.518 "num_base_bdevs_discovered": 2, 00:32:21.518 "num_base_bdevs_operational": 2, 00:32:21.518 "base_bdevs_list": [ 00:32:21.518 { 00:32:21.518 "name": null, 00:32:21.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.518 "is_configured": false, 00:32:21.518 "data_offset": 0, 00:32:21.518 "data_size": 63488 00:32:21.518 }, 00:32:21.518 { 00:32:21.518 "name": "BaseBdev2", 00:32:21.518 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:21.518 "is_configured": true, 00:32:21.518 "data_offset": 2048, 00:32:21.518 "data_size": 63488 00:32:21.518 }, 00:32:21.518 { 00:32:21.518 "name": "BaseBdev3", 00:32:21.518 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:21.518 "is_configured": true, 00:32:21.518 "data_offset": 2048, 00:32:21.518 "data_size": 63488 00:32:21.518 } 00:32:21.518 ] 00:32:21.518 }' 00:32:21.518 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.518 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:21.777 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:21.778 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.036 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.036 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:22.036 "name": "raid_bdev1", 00:32:22.036 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:22.036 "strip_size_kb": 64, 00:32:22.036 "state": "online", 00:32:22.036 "raid_level": "raid5f", 00:32:22.036 "superblock": true, 00:32:22.036 "num_base_bdevs": 3, 00:32:22.036 "num_base_bdevs_discovered": 2, 00:32:22.036 "num_base_bdevs_operational": 2, 00:32:22.036 "base_bdevs_list": [ 00:32:22.036 { 00:32:22.036 "name": null, 00:32:22.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.036 "is_configured": false, 00:32:22.036 "data_offset": 0, 00:32:22.036 "data_size": 63488 00:32:22.036 }, 00:32:22.036 { 00:32:22.036 "name": "BaseBdev2", 00:32:22.036 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 
00:32:22.036 "is_configured": true, 00:32:22.036 "data_offset": 2048, 00:32:22.036 "data_size": 63488 00:32:22.037 }, 00:32:22.037 { 00:32:22.037 "name": "BaseBdev3", 00:32:22.037 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:22.037 "is_configured": true, 00:32:22.037 "data_offset": 2048, 00:32:22.037 "data_size": 63488 00:32:22.037 } 00:32:22.037 ] 00:32:22.037 }' 00:32:22.037 13:43:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.037 13:43:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.037 [2024-10-28 13:43:36.089602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:22.037 [2024-10-28 13:43:36.089826] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:22.037 [2024-10-28 13:43:36.089849] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:22.037 request: 00:32:22.037 { 00:32:22.037 "base_bdev": "BaseBdev1", 00:32:22.037 "raid_bdev": "raid_bdev1", 00:32:22.037 "method": "bdev_raid_add_base_bdev", 00:32:22.037 "req_id": 1 00:32:22.037 } 00:32:22.037 Got JSON-RPC error response 00:32:22.037 response: 00:32:22.037 { 00:32:22.037 "code": -22, 00:32:22.037 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:22.037 } 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:22.037 13:43:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:22.973 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.232 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.232 "name": "raid_bdev1", 00:32:23.232 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:23.232 "strip_size_kb": 64, 00:32:23.232 "state": "online", 00:32:23.232 "raid_level": "raid5f", 00:32:23.232 "superblock": true, 00:32:23.232 "num_base_bdevs": 3, 00:32:23.232 "num_base_bdevs_discovered": 2, 00:32:23.232 "num_base_bdevs_operational": 2, 00:32:23.232 "base_bdevs_list": [ 00:32:23.232 { 00:32:23.232 "name": null, 00:32:23.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.232 "is_configured": false, 00:32:23.232 "data_offset": 0, 00:32:23.232 "data_size": 63488 00:32:23.232 }, 00:32:23.232 { 00:32:23.232 
"name": "BaseBdev2", 00:32:23.232 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:23.232 "is_configured": true, 00:32:23.232 "data_offset": 2048, 00:32:23.232 "data_size": 63488 00:32:23.232 }, 00:32:23.232 { 00:32:23.232 "name": "BaseBdev3", 00:32:23.232 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:23.232 "is_configured": true, 00:32:23.232 "data_offset": 2048, 00:32:23.232 "data_size": 63488 00:32:23.232 } 00:32:23.232 ] 00:32:23.232 }' 00:32:23.232 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.232 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.491 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:23.751 "name": "raid_bdev1", 00:32:23.751 "uuid": "e9309677-50f2-48c3-93ee-75cbb12cc192", 00:32:23.751 
"strip_size_kb": 64, 00:32:23.751 "state": "online", 00:32:23.751 "raid_level": "raid5f", 00:32:23.751 "superblock": true, 00:32:23.751 "num_base_bdevs": 3, 00:32:23.751 "num_base_bdevs_discovered": 2, 00:32:23.751 "num_base_bdevs_operational": 2, 00:32:23.751 "base_bdevs_list": [ 00:32:23.751 { 00:32:23.751 "name": null, 00:32:23.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.751 "is_configured": false, 00:32:23.751 "data_offset": 0, 00:32:23.751 "data_size": 63488 00:32:23.751 }, 00:32:23.751 { 00:32:23.751 "name": "BaseBdev2", 00:32:23.751 "uuid": "f6604252-7c84-5c7e-909a-c7990074ce0b", 00:32:23.751 "is_configured": true, 00:32:23.751 "data_offset": 2048, 00:32:23.751 "data_size": 63488 00:32:23.751 }, 00:32:23.751 { 00:32:23.751 "name": "BaseBdev3", 00:32:23.751 "uuid": "76484dd4-ef1a-56f8-a153-979c9ed10025", 00:32:23.751 "is_configured": true, 00:32:23.751 "data_offset": 2048, 00:32:23.751 "data_size": 63488 00:32:23.751 } 00:32:23.751 ] 00:32:23.751 }' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 94761 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94761 ']' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 94761 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:23.751 13:43:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94761 00:32:23.751 killing process with pid 94761 00:32:23.751 Received shutdown signal, test time was about 60.000000 seconds 00:32:23.751 00:32:23.751 Latency(us) 00:32:23.751 [2024-10-28T13:43:37.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.751 [2024-10-28T13:43:37.911Z] =================================================================================================================== 00:32:23.751 [2024-10-28T13:43:37.911Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94761' 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 94761 00:32:23.751 [2024-10-28 13:43:37.837086] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:23.751 13:43:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 94761 00:32:23.751 [2024-10-28 13:43:37.837281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:23.751 [2024-10-28 13:43:37.837368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:23.751 [2024-10-28 13:43:37.837388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:23.751 [2024-10-28 13:43:37.878046] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:24.013 13:43:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:32:24.013 00:32:24.013 real 0m23.550s 00:32:24.013 user 0m32.010s 
00:32:24.013 sys 0m2.549s 00:32:24.013 ************************************ 00:32:24.013 END TEST raid5f_rebuild_test_sb 00:32:24.013 ************************************ 00:32:24.013 13:43:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:24.013 13:43:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:24.013 13:43:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:32:24.013 13:43:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:32:24.321 13:43:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:24.321 13:43:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:24.321 13:43:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:24.321 ************************************ 00:32:24.321 START TEST raid5f_state_function_test 00:32:24.321 ************************************ 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=95508 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95508' 00:32:24.321 Process raid pid: 95508 00:32:24.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 95508 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 95508 ']' 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.321 13:43:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.321 [2024-10-28 13:43:38.297515] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:32:24.321 [2024-10-28 13:43:38.297693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.321 [2024-10-28 13:43:38.453318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:24.581 [2024-10-28 13:43:38.485944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.581 [2024-10-28 13:43:38.532190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.581 [2024-10-28 13:43:38.594289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:24.581 [2024-10-28 13:43:38.594558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.149 [2024-10-28 13:43:39.299400] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.149 [2024-10-28 13:43:39.299498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.149 [2024-10-28 13:43:39.299526] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.149 [2024-10-28 13:43:39.299541] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.149 [2024-10-28 13:43:39.299557] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:25.149 [2024-10-28 13:43:39.299569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:25.149 [2024-10-28 13:43:39.299581] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:25.149 [2024-10-28 13:43:39.299593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.149 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.408 "name": "Existed_Raid", 00:32:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.408 "strip_size_kb": 64, 00:32:25.408 "state": "configuring", 00:32:25.408 "raid_level": "raid5f", 00:32:25.408 "superblock": false, 00:32:25.408 "num_base_bdevs": 4, 00:32:25.408 "num_base_bdevs_discovered": 0, 00:32:25.408 "num_base_bdevs_operational": 4, 00:32:25.408 "base_bdevs_list": [ 00:32:25.408 { 00:32:25.408 "name": "BaseBdev1", 00:32:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.408 "is_configured": false, 00:32:25.408 "data_offset": 0, 00:32:25.408 "data_size": 0 00:32:25.408 }, 00:32:25.408 { 00:32:25.408 "name": "BaseBdev2", 00:32:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.408 "is_configured": false, 00:32:25.408 "data_offset": 0, 00:32:25.408 "data_size": 0 00:32:25.408 }, 00:32:25.408 { 00:32:25.408 "name": "BaseBdev3", 00:32:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.408 "is_configured": false, 00:32:25.408 "data_offset": 0, 00:32:25.408 "data_size": 0 00:32:25.408 }, 00:32:25.408 { 00:32:25.408 "name": "BaseBdev4", 00:32:25.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.408 "is_configured": false, 00:32:25.408 "data_offset": 0, 00:32:25.408 "data_size": 0 00:32:25.408 } 00:32:25.408 ] 00:32:25.408 }' 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:32:25.408 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.978 [2024-10-28 13:43:39.839482] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:25.978 [2024-10-28 13:43:39.839528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.978 [2024-10-28 13:43:39.851494] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.978 [2024-10-28 13:43:39.851673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.978 [2024-10-28 13:43:39.851821] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.978 [2024-10-28 13:43:39.851904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.978 [2024-10-28 13:43:39.852174] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:25.978 [2024-10-28 13:43:39.852252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:25.978 [2024-10-28 13:43:39.852466] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:25.978 [2024-10-28 13:43:39.852540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:25.978 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.979 [2024-10-28 13:43:39.872328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:25.979 BaseBdev1 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.979 13:43:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.979 [ 00:32:25.979 { 00:32:25.979 "name": "BaseBdev1", 00:32:25.979 "aliases": [ 00:32:25.979 "37e67480-e4bf-417d-8f2b-98efd92089e3" 00:32:25.979 ], 00:32:25.979 "product_name": "Malloc disk", 00:32:25.979 "block_size": 512, 00:32:25.979 "num_blocks": 65536, 00:32:25.979 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:25.979 "assigned_rate_limits": { 00:32:25.979 "rw_ios_per_sec": 0, 00:32:25.979 "rw_mbytes_per_sec": 0, 00:32:25.979 "r_mbytes_per_sec": 0, 00:32:25.979 "w_mbytes_per_sec": 0 00:32:25.979 }, 00:32:25.979 "claimed": true, 00:32:25.979 "claim_type": "exclusive_write", 00:32:25.979 "zoned": false, 00:32:25.979 "supported_io_types": { 00:32:25.979 "read": true, 00:32:25.979 "write": true, 00:32:25.979 "unmap": true, 00:32:25.979 "flush": true, 00:32:25.979 "reset": true, 00:32:25.979 "nvme_admin": false, 00:32:25.979 "nvme_io": false, 00:32:25.979 "nvme_io_md": false, 00:32:25.979 "write_zeroes": true, 00:32:25.979 "zcopy": true, 00:32:25.979 "get_zone_info": false, 00:32:25.979 "zone_management": false, 00:32:25.979 "zone_append": false, 00:32:25.979 "compare": false, 00:32:25.979 "compare_and_write": false, 00:32:25.979 "abort": true, 00:32:25.979 "seek_hole": false, 00:32:25.979 "seek_data": false, 00:32:25.979 "copy": true, 00:32:25.979 "nvme_iov_md": false 00:32:25.979 }, 00:32:25.979 "memory_domains": [ 00:32:25.979 { 00:32:25.979 "dma_device_id": "system", 00:32:25.979 "dma_device_type": 1 
00:32:25.979 }, 00:32:25.979 { 00:32:25.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.979 "dma_device_type": 2 00:32:25.979 } 00:32:25.979 ], 00:32:25.979 "driver_specific": {} 00:32:25.979 } 00:32:25.979 ] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.979 13:43:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.979 "name": "Existed_Raid", 00:32:25.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.979 "strip_size_kb": 64, 00:32:25.979 "state": "configuring", 00:32:25.979 "raid_level": "raid5f", 00:32:25.979 "superblock": false, 00:32:25.979 "num_base_bdevs": 4, 00:32:25.979 "num_base_bdevs_discovered": 1, 00:32:25.979 "num_base_bdevs_operational": 4, 00:32:25.979 "base_bdevs_list": [ 00:32:25.979 { 00:32:25.979 "name": "BaseBdev1", 00:32:25.979 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:25.979 "is_configured": true, 00:32:25.979 "data_offset": 0, 00:32:25.979 "data_size": 65536 00:32:25.979 }, 00:32:25.979 { 00:32:25.979 "name": "BaseBdev2", 00:32:25.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.979 "is_configured": false, 00:32:25.979 "data_offset": 0, 00:32:25.979 "data_size": 0 00:32:25.979 }, 00:32:25.979 { 00:32:25.979 "name": "BaseBdev3", 00:32:25.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.979 "is_configured": false, 00:32:25.979 "data_offset": 0, 00:32:25.979 "data_size": 0 00:32:25.979 }, 00:32:25.979 { 00:32:25.979 "name": "BaseBdev4", 00:32:25.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.979 "is_configured": false, 00:32:25.979 "data_offset": 0, 00:32:25.979 "data_size": 0 00:32:25.979 } 00:32:25.979 ] 00:32:25.979 }' 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.979 13:43:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:26.547 
13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.547 [2024-10-28 13:43:40.453001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:26.547 [2024-10-28 13:43:40.453076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.547 [2024-10-28 13:43:40.461046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:26.547 [2024-10-28 13:43:40.463691] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:26.547 [2024-10-28 13:43:40.463961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:26.547 [2024-10-28 13:43:40.464018] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:26.547 [2024-10-28 13:43:40.464041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:26.547 [2024-10-28 13:43:40.464060] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:26.547 [2024-10-28 13:43:40.464079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.547 "name": "Existed_Raid", 00:32:26.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.547 "strip_size_kb": 64, 00:32:26.547 "state": "configuring", 00:32:26.547 "raid_level": "raid5f", 00:32:26.547 "superblock": false, 00:32:26.547 "num_base_bdevs": 4, 00:32:26.547 "num_base_bdevs_discovered": 1, 00:32:26.547 "num_base_bdevs_operational": 4, 00:32:26.547 "base_bdevs_list": [ 00:32:26.547 { 00:32:26.547 "name": "BaseBdev1", 00:32:26.547 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:26.547 "is_configured": true, 00:32:26.547 "data_offset": 0, 00:32:26.547 "data_size": 65536 00:32:26.547 }, 00:32:26.547 { 00:32:26.547 "name": "BaseBdev2", 00:32:26.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.547 "is_configured": false, 00:32:26.547 "data_offset": 0, 00:32:26.547 "data_size": 0 00:32:26.547 }, 00:32:26.547 { 00:32:26.547 "name": "BaseBdev3", 00:32:26.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.547 "is_configured": false, 00:32:26.547 "data_offset": 0, 00:32:26.547 "data_size": 0 00:32:26.547 }, 00:32:26.547 { 00:32:26.547 "name": "BaseBdev4", 00:32:26.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.547 "is_configured": false, 00:32:26.547 "data_offset": 0, 00:32:26.547 "data_size": 0 00:32:26.547 } 00:32:26.547 ] 00:32:26.547 }' 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.547 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.115 13:43:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:27.115 13:43:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.115 13:43:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:27.115 [2024-10-28 13:43:41.016128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.115 BaseBdev2 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.115 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.116 [ 00:32:27.116 { 00:32:27.116 "name": "BaseBdev2", 00:32:27.116 "aliases": [ 00:32:27.116 "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66" 00:32:27.116 ], 00:32:27.116 "product_name": "Malloc disk", 
00:32:27.116 "block_size": 512, 00:32:27.116 "num_blocks": 65536, 00:32:27.116 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:27.116 "assigned_rate_limits": { 00:32:27.116 "rw_ios_per_sec": 0, 00:32:27.116 "rw_mbytes_per_sec": 0, 00:32:27.116 "r_mbytes_per_sec": 0, 00:32:27.116 "w_mbytes_per_sec": 0 00:32:27.116 }, 00:32:27.116 "claimed": true, 00:32:27.116 "claim_type": "exclusive_write", 00:32:27.116 "zoned": false, 00:32:27.116 "supported_io_types": { 00:32:27.116 "read": true, 00:32:27.116 "write": true, 00:32:27.116 "unmap": true, 00:32:27.116 "flush": true, 00:32:27.116 "reset": true, 00:32:27.116 "nvme_admin": false, 00:32:27.116 "nvme_io": false, 00:32:27.116 "nvme_io_md": false, 00:32:27.116 "write_zeroes": true, 00:32:27.116 "zcopy": true, 00:32:27.116 "get_zone_info": false, 00:32:27.116 "zone_management": false, 00:32:27.116 "zone_append": false, 00:32:27.116 "compare": false, 00:32:27.116 "compare_and_write": false, 00:32:27.116 "abort": true, 00:32:27.116 "seek_hole": false, 00:32:27.116 "seek_data": false, 00:32:27.116 "copy": true, 00:32:27.116 "nvme_iov_md": false 00:32:27.116 }, 00:32:27.116 "memory_domains": [ 00:32:27.116 { 00:32:27.116 "dma_device_id": "system", 00:32:27.116 "dma_device_type": 1 00:32:27.116 }, 00:32:27.116 { 00:32:27.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.116 "dma_device_type": 2 00:32:27.116 } 00:32:27.116 ], 00:32:27.116 "driver_specific": {} 00:32:27.116 } 00:32:27.116 ] 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.116 "name": "Existed_Raid", 00:32:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.116 "strip_size_kb": 64, 00:32:27.116 "state": "configuring", 00:32:27.116 "raid_level": "raid5f", 00:32:27.116 "superblock": false, 
00:32:27.116 "num_base_bdevs": 4, 00:32:27.116 "num_base_bdevs_discovered": 2, 00:32:27.116 "num_base_bdevs_operational": 4, 00:32:27.116 "base_bdevs_list": [ 00:32:27.116 { 00:32:27.116 "name": "BaseBdev1", 00:32:27.116 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:27.116 "is_configured": true, 00:32:27.116 "data_offset": 0, 00:32:27.116 "data_size": 65536 00:32:27.116 }, 00:32:27.116 { 00:32:27.116 "name": "BaseBdev2", 00:32:27.116 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:27.116 "is_configured": true, 00:32:27.116 "data_offset": 0, 00:32:27.116 "data_size": 65536 00:32:27.116 }, 00:32:27.116 { 00:32:27.116 "name": "BaseBdev3", 00:32:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.116 "is_configured": false, 00:32:27.116 "data_offset": 0, 00:32:27.116 "data_size": 0 00:32:27.116 }, 00:32:27.116 { 00:32:27.116 "name": "BaseBdev4", 00:32:27.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.116 "is_configured": false, 00:32:27.116 "data_offset": 0, 00:32:27.116 "data_size": 0 00:32:27.116 } 00:32:27.116 ] 00:32:27.116 }' 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.116 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 [2024-10-28 13:43:41.625300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:27.684 BaseBdev3 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev3 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 [ 00:32:27.684 { 00:32:27.684 "name": "BaseBdev3", 00:32:27.684 "aliases": [ 00:32:27.684 "3090f9ee-6701-4f24-9c24-b9d3422f1489" 00:32:27.684 ], 00:32:27.684 "product_name": "Malloc disk", 00:32:27.684 "block_size": 512, 00:32:27.684 "num_blocks": 65536, 00:32:27.684 "uuid": "3090f9ee-6701-4f24-9c24-b9d3422f1489", 00:32:27.684 "assigned_rate_limits": { 00:32:27.684 "rw_ios_per_sec": 0, 00:32:27.684 "rw_mbytes_per_sec": 0, 00:32:27.684 "r_mbytes_per_sec": 0, 00:32:27.684 "w_mbytes_per_sec": 0 00:32:27.684 }, 00:32:27.684 "claimed": true, 00:32:27.684 "claim_type": "exclusive_write", 
00:32:27.684 "zoned": false, 00:32:27.684 "supported_io_types": { 00:32:27.684 "read": true, 00:32:27.684 "write": true, 00:32:27.684 "unmap": true, 00:32:27.684 "flush": true, 00:32:27.684 "reset": true, 00:32:27.684 "nvme_admin": false, 00:32:27.684 "nvme_io": false, 00:32:27.684 "nvme_io_md": false, 00:32:27.684 "write_zeroes": true, 00:32:27.684 "zcopy": true, 00:32:27.684 "get_zone_info": false, 00:32:27.684 "zone_management": false, 00:32:27.684 "zone_append": false, 00:32:27.684 "compare": false, 00:32:27.684 "compare_and_write": false, 00:32:27.684 "abort": true, 00:32:27.684 "seek_hole": false, 00:32:27.684 "seek_data": false, 00:32:27.684 "copy": true, 00:32:27.684 "nvme_iov_md": false 00:32:27.684 }, 00:32:27.684 "memory_domains": [ 00:32:27.684 { 00:32:27.684 "dma_device_id": "system", 00:32:27.684 "dma_device_type": 1 00:32:27.684 }, 00:32:27.684 { 00:32:27.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.684 "dma_device_type": 2 00:32:27.684 } 00:32:27.684 ], 00:32:27.684 "driver_specific": {} 00:32:27.684 } 00:32:27.684 ] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.684 "name": "Existed_Raid", 00:32:27.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.684 "strip_size_kb": 64, 00:32:27.684 "state": "configuring", 00:32:27.684 "raid_level": "raid5f", 00:32:27.684 "superblock": false, 00:32:27.684 "num_base_bdevs": 4, 00:32:27.684 "num_base_bdevs_discovered": 3, 00:32:27.684 "num_base_bdevs_operational": 4, 00:32:27.684 "base_bdevs_list": [ 00:32:27.684 { 00:32:27.684 "name": "BaseBdev1", 00:32:27.684 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:27.684 "is_configured": true, 00:32:27.684 "data_offset": 0, 00:32:27.684 "data_size": 65536 00:32:27.684 }, 00:32:27.684 { 
00:32:27.684 "name": "BaseBdev2", 00:32:27.684 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:27.684 "is_configured": true, 00:32:27.684 "data_offset": 0, 00:32:27.684 "data_size": 65536 00:32:27.684 }, 00:32:27.684 { 00:32:27.684 "name": "BaseBdev3", 00:32:27.684 "uuid": "3090f9ee-6701-4f24-9c24-b9d3422f1489", 00:32:27.684 "is_configured": true, 00:32:27.684 "data_offset": 0, 00:32:27.684 "data_size": 65536 00:32:27.684 }, 00:32:27.684 { 00:32:27.684 "name": "BaseBdev4", 00:32:27.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.684 "is_configured": false, 00:32:27.684 "data_offset": 0, 00:32:27.684 "data_size": 0 00:32:27.684 } 00:32:27.684 ] 00:32:27.684 }' 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.684 13:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.252 [2024-10-28 13:43:42.207566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:28.252 [2024-10-28 13:43:42.207629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:28.252 [2024-10-28 13:43:42.207648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:28.252 [2024-10-28 13:43:42.208012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:32:28.252 [2024-10-28 13:43:42.208595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:28.252 [2024-10-28 13:43:42.208612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007b00 00:32:28.252 [2024-10-28 13:43:42.208856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:28.252 BaseBdev4 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.252 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.252 [ 00:32:28.252 { 00:32:28.252 "name": "BaseBdev4", 00:32:28.252 "aliases": [ 00:32:28.252 "16be79ea-cd8c-4b11-a718-1545f547c74f" 00:32:28.252 ], 00:32:28.252 "product_name": "Malloc disk", 00:32:28.252 "block_size": 
512, 00:32:28.252 "num_blocks": 65536, 00:32:28.252 "uuid": "16be79ea-cd8c-4b11-a718-1545f547c74f", 00:32:28.252 "assigned_rate_limits": { 00:32:28.252 "rw_ios_per_sec": 0, 00:32:28.252 "rw_mbytes_per_sec": 0, 00:32:28.252 "r_mbytes_per_sec": 0, 00:32:28.252 "w_mbytes_per_sec": 0 00:32:28.252 }, 00:32:28.252 "claimed": true, 00:32:28.252 "claim_type": "exclusive_write", 00:32:28.252 "zoned": false, 00:32:28.252 "supported_io_types": { 00:32:28.252 "read": true, 00:32:28.252 "write": true, 00:32:28.252 "unmap": true, 00:32:28.252 "flush": true, 00:32:28.252 "reset": true, 00:32:28.252 "nvme_admin": false, 00:32:28.252 "nvme_io": false, 00:32:28.252 "nvme_io_md": false, 00:32:28.252 "write_zeroes": true, 00:32:28.252 "zcopy": true, 00:32:28.252 "get_zone_info": false, 00:32:28.252 "zone_management": false, 00:32:28.252 "zone_append": false, 00:32:28.252 "compare": false, 00:32:28.252 "compare_and_write": false, 00:32:28.252 "abort": true, 00:32:28.252 "seek_hole": false, 00:32:28.252 "seek_data": false, 00:32:28.252 "copy": true, 00:32:28.252 "nvme_iov_md": false 00:32:28.252 }, 00:32:28.252 "memory_domains": [ 00:32:28.252 { 00:32:28.252 "dma_device_id": "system", 00:32:28.252 "dma_device_type": 1 00:32:28.252 }, 00:32:28.252 { 00:32:28.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.252 "dma_device_type": 2 00:32:28.252 } 00:32:28.252 ], 00:32:28.252 "driver_specific": {} 00:32:28.252 } 00:32:28.252 ] 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 
64 4 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.253 "name": "Existed_Raid", 00:32:28.253 "uuid": "e078326f-f701-4bb5-a727-db4a9b625bf7", 00:32:28.253 "strip_size_kb": 64, 00:32:28.253 "state": "online", 00:32:28.253 "raid_level": "raid5f", 00:32:28.253 "superblock": false, 00:32:28.253 "num_base_bdevs": 4, 
00:32:28.253 "num_base_bdevs_discovered": 4, 00:32:28.253 "num_base_bdevs_operational": 4, 00:32:28.253 "base_bdevs_list": [ 00:32:28.253 { 00:32:28.253 "name": "BaseBdev1", 00:32:28.253 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:28.253 "is_configured": true, 00:32:28.253 "data_offset": 0, 00:32:28.253 "data_size": 65536 00:32:28.253 }, 00:32:28.253 { 00:32:28.253 "name": "BaseBdev2", 00:32:28.253 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:28.253 "is_configured": true, 00:32:28.253 "data_offset": 0, 00:32:28.253 "data_size": 65536 00:32:28.253 }, 00:32:28.253 { 00:32:28.253 "name": "BaseBdev3", 00:32:28.253 "uuid": "3090f9ee-6701-4f24-9c24-b9d3422f1489", 00:32:28.253 "is_configured": true, 00:32:28.253 "data_offset": 0, 00:32:28.253 "data_size": 65536 00:32:28.253 }, 00:32:28.253 { 00:32:28.253 "name": "BaseBdev4", 00:32:28.253 "uuid": "16be79ea-cd8c-4b11-a718-1545f547c74f", 00:32:28.253 "is_configured": true, 00:32:28.253 "data_offset": 0, 00:32:28.253 "data_size": 65536 00:32:28.253 } 00:32:28.253 ] 00:32:28.253 }' 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.253 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:28.820 13:43:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.820 [2024-10-28 13:43:42.788054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.820 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:28.820 "name": "Existed_Raid", 00:32:28.820 "aliases": [ 00:32:28.820 "e078326f-f701-4bb5-a727-db4a9b625bf7" 00:32:28.820 ], 00:32:28.820 "product_name": "Raid Volume", 00:32:28.820 "block_size": 512, 00:32:28.820 "num_blocks": 196608, 00:32:28.820 "uuid": "e078326f-f701-4bb5-a727-db4a9b625bf7", 00:32:28.820 "assigned_rate_limits": { 00:32:28.820 "rw_ios_per_sec": 0, 00:32:28.820 "rw_mbytes_per_sec": 0, 00:32:28.820 "r_mbytes_per_sec": 0, 00:32:28.820 "w_mbytes_per_sec": 0 00:32:28.820 }, 00:32:28.820 "claimed": false, 00:32:28.820 "zoned": false, 00:32:28.820 "supported_io_types": { 00:32:28.820 "read": true, 00:32:28.820 "write": true, 00:32:28.820 "unmap": false, 00:32:28.820 "flush": false, 00:32:28.820 "reset": true, 00:32:28.820 "nvme_admin": false, 00:32:28.820 "nvme_io": false, 00:32:28.820 "nvme_io_md": false, 00:32:28.820 "write_zeroes": true, 00:32:28.820 "zcopy": false, 00:32:28.820 "get_zone_info": false, 00:32:28.820 "zone_management": false, 00:32:28.821 "zone_append": false, 00:32:28.821 "compare": false, 00:32:28.821 "compare_and_write": false, 00:32:28.821 "abort": false, 00:32:28.821 "seek_hole": false, 00:32:28.821 "seek_data": false, 00:32:28.821 "copy": false, 00:32:28.821 "nvme_iov_md": false 
00:32:28.821 }, 00:32:28.821 "driver_specific": { 00:32:28.821 "raid": { 00:32:28.821 "uuid": "e078326f-f701-4bb5-a727-db4a9b625bf7", 00:32:28.821 "strip_size_kb": 64, 00:32:28.821 "state": "online", 00:32:28.821 "raid_level": "raid5f", 00:32:28.821 "superblock": false, 00:32:28.821 "num_base_bdevs": 4, 00:32:28.821 "num_base_bdevs_discovered": 4, 00:32:28.821 "num_base_bdevs_operational": 4, 00:32:28.821 "base_bdevs_list": [ 00:32:28.821 { 00:32:28.821 "name": "BaseBdev1", 00:32:28.821 "uuid": "37e67480-e4bf-417d-8f2b-98efd92089e3", 00:32:28.821 "is_configured": true, 00:32:28.821 "data_offset": 0, 00:32:28.821 "data_size": 65536 00:32:28.821 }, 00:32:28.821 { 00:32:28.821 "name": "BaseBdev2", 00:32:28.821 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:28.821 "is_configured": true, 00:32:28.821 "data_offset": 0, 00:32:28.821 "data_size": 65536 00:32:28.821 }, 00:32:28.821 { 00:32:28.821 "name": "BaseBdev3", 00:32:28.821 "uuid": "3090f9ee-6701-4f24-9c24-b9d3422f1489", 00:32:28.821 "is_configured": true, 00:32:28.821 "data_offset": 0, 00:32:28.821 "data_size": 65536 00:32:28.821 }, 00:32:28.821 { 00:32:28.821 "name": "BaseBdev4", 00:32:28.821 "uuid": "16be79ea-cd8c-4b11-a718-1545f547c74f", 00:32:28.821 "is_configured": true, 00:32:28.821 "data_offset": 0, 00:32:28.821 "data_size": 65536 00:32:28.821 } 00:32:28.821 ] 00:32:28.821 } 00:32:28.821 } 00:32:28.821 }' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:28.821 BaseBdev2 00:32:28.821 BaseBdev3 00:32:28.821 BaseBdev4' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.821 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.080 13:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:29.080 
13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.080 [2024-10-28 13:43:43.159982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.080 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:29.080 "name": "Existed_Raid", 00:32:29.080 "uuid": "e078326f-f701-4bb5-a727-db4a9b625bf7", 00:32:29.080 "strip_size_kb": 64, 00:32:29.080 "state": "online", 00:32:29.080 "raid_level": "raid5f", 00:32:29.080 "superblock": false, 00:32:29.080 "num_base_bdevs": 4, 00:32:29.080 "num_base_bdevs_discovered": 3, 00:32:29.080 "num_base_bdevs_operational": 3, 00:32:29.080 "base_bdevs_list": [ 00:32:29.080 { 00:32:29.080 "name": null, 00:32:29.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.080 "is_configured": false, 00:32:29.080 "data_offset": 0, 00:32:29.080 "data_size": 65536 00:32:29.080 }, 00:32:29.080 { 00:32:29.080 "name": "BaseBdev2", 00:32:29.080 "uuid": "0e9b5ae9-33d5-45f6-8aeb-b36537c37f66", 00:32:29.080 "is_configured": true, 00:32:29.080 "data_offset": 0, 00:32:29.080 "data_size": 65536 00:32:29.080 }, 00:32:29.080 { 00:32:29.080 "name": "BaseBdev3", 00:32:29.080 "uuid": "3090f9ee-6701-4f24-9c24-b9d3422f1489", 00:32:29.080 "is_configured": true, 00:32:29.080 "data_offset": 0, 00:32:29.080 "data_size": 65536 00:32:29.080 }, 00:32:29.080 { 00:32:29.080 "name": "BaseBdev4", 00:32:29.080 "uuid": "16be79ea-cd8c-4b11-a718-1545f547c74f", 00:32:29.080 
"is_configured": true, 00:32:29.080 "data_offset": 0, 00:32:29.080 "data_size": 65536 00:32:29.080 } 00:32:29.080 ] 00:32:29.080 }' 00:32:29.081 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:29.081 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.648 [2024-10-28 13:43:43.763656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:29.648 [2024-10-28 13:43:43.763808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:29.648 [2024-10-28 13:43:43.774979] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.648 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 [2024-10-28 13:43:43.835011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:29.908 13:43:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 [2024-10-28 13:43:43.906873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:29.908 [2024-10-28 13:43:43.906932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:29.908 13:43:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 BaseBdev2 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.908 13:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.908 [ 00:32:29.908 { 00:32:29.908 "name": "BaseBdev2", 00:32:29.908 "aliases": [ 00:32:29.908 "93c81d39-c8d1-444a-8c16-4db54d6fd799" 00:32:29.908 ], 00:32:29.908 "product_name": "Malloc disk", 00:32:29.908 "block_size": 512, 00:32:29.908 "num_blocks": 65536, 00:32:29.908 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:29.908 "assigned_rate_limits": { 00:32:29.908 "rw_ios_per_sec": 0, 00:32:29.908 "rw_mbytes_per_sec": 0, 00:32:29.908 "r_mbytes_per_sec": 0, 00:32:29.908 "w_mbytes_per_sec": 0 00:32:29.908 }, 00:32:29.908 "claimed": false, 00:32:29.908 "zoned": false, 00:32:29.908 "supported_io_types": { 00:32:29.908 "read": true, 00:32:29.908 "write": true, 00:32:29.908 "unmap": true, 00:32:29.908 "flush": true, 00:32:29.908 "reset": true, 00:32:29.908 "nvme_admin": false, 00:32:29.908 "nvme_io": false, 00:32:29.908 "nvme_io_md": false, 00:32:29.908 "write_zeroes": true, 00:32:29.908 "zcopy": true, 00:32:29.908 "get_zone_info": false, 00:32:29.909 "zone_management": false, 00:32:29.909 "zone_append": false, 00:32:29.909 "compare": false, 00:32:29.909 "compare_and_write": false, 00:32:29.909 "abort": true, 00:32:29.909 "seek_hole": false, 00:32:29.909 
"seek_data": false, 00:32:29.909 "copy": true, 00:32:29.909 "nvme_iov_md": false 00:32:29.909 }, 00:32:29.909 "memory_domains": [ 00:32:29.909 { 00:32:29.909 "dma_device_id": "system", 00:32:29.909 "dma_device_type": 1 00:32:29.909 }, 00:32:29.909 { 00:32:29.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.909 "dma_device_type": 2 00:32:29.909 } 00:32:29.909 ], 00:32:29.909 "driver_specific": {} 00:32:29.909 } 00:32:29.909 ] 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.909 BaseBdev3 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.909 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.909 [ 00:32:29.909 { 00:32:29.909 "name": "BaseBdev3", 00:32:29.909 "aliases": [ 00:32:29.909 "74e67ec1-db54-49c4-82ce-ee2961b1a968" 00:32:29.909 ], 00:32:29.909 "product_name": "Malloc disk", 00:32:29.909 "block_size": 512, 00:32:29.909 "num_blocks": 65536, 00:32:29.909 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:29.909 "assigned_rate_limits": { 00:32:29.909 "rw_ios_per_sec": 0, 00:32:29.909 "rw_mbytes_per_sec": 0, 00:32:29.909 "r_mbytes_per_sec": 0, 00:32:29.909 "w_mbytes_per_sec": 0 00:32:29.909 }, 00:32:29.909 "claimed": false, 00:32:29.909 "zoned": false, 00:32:29.909 "supported_io_types": { 00:32:29.909 "read": true, 00:32:29.909 "write": true, 00:32:29.909 "unmap": true, 00:32:29.909 "flush": true, 00:32:30.168 "reset": true, 00:32:30.168 "nvme_admin": false, 00:32:30.168 "nvme_io": false, 00:32:30.168 "nvme_io_md": false, 00:32:30.168 "write_zeroes": true, 00:32:30.168 "zcopy": true, 00:32:30.168 "get_zone_info": false, 00:32:30.168 "zone_management": false, 00:32:30.168 "zone_append": false, 00:32:30.168 "compare": false, 00:32:30.168 "compare_and_write": false, 00:32:30.168 "abort": true, 
00:32:30.168 "seek_hole": false, 00:32:30.168 "seek_data": false, 00:32:30.168 "copy": true, 00:32:30.168 "nvme_iov_md": false 00:32:30.168 }, 00:32:30.168 "memory_domains": [ 00:32:30.168 { 00:32:30.168 "dma_device_id": "system", 00:32:30.168 "dma_device_type": 1 00:32:30.168 }, 00:32:30.168 { 00:32:30.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.168 "dma_device_type": 2 00:32:30.168 } 00:32:30.168 ], 00:32:30.168 "driver_specific": {} 00:32:30.168 } 00:32:30.168 ] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.168 BaseBdev4 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:30.168 13:43:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.168 [ 00:32:30.168 { 00:32:30.168 "name": "BaseBdev4", 00:32:30.168 "aliases": [ 00:32:30.168 "d51c2134-d714-4172-b46d-f2dfc7c1dbcf" 00:32:30.168 ], 00:32:30.168 "product_name": "Malloc disk", 00:32:30.168 "block_size": 512, 00:32:30.168 "num_blocks": 65536, 00:32:30.168 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:30.168 "assigned_rate_limits": { 00:32:30.168 "rw_ios_per_sec": 0, 00:32:30.168 "rw_mbytes_per_sec": 0, 00:32:30.168 "r_mbytes_per_sec": 0, 00:32:30.168 "w_mbytes_per_sec": 0 00:32:30.168 }, 00:32:30.168 "claimed": false, 00:32:30.168 "zoned": false, 00:32:30.168 "supported_io_types": { 00:32:30.168 "read": true, 00:32:30.168 "write": true, 00:32:30.168 "unmap": true, 00:32:30.168 "flush": true, 00:32:30.168 "reset": true, 00:32:30.168 "nvme_admin": false, 00:32:30.168 "nvme_io": false, 00:32:30.168 "nvme_io_md": false, 00:32:30.168 "write_zeroes": true, 00:32:30.168 "zcopy": true, 00:32:30.168 "get_zone_info": false, 00:32:30.168 "zone_management": false, 00:32:30.168 "zone_append": false, 00:32:30.168 "compare": false, 00:32:30.168 
"compare_and_write": false, 00:32:30.168 "abort": true, 00:32:30.168 "seek_hole": false, 00:32:30.168 "seek_data": false, 00:32:30.168 "copy": true, 00:32:30.168 "nvme_iov_md": false 00:32:30.168 }, 00:32:30.168 "memory_domains": [ 00:32:30.168 { 00:32:30.168 "dma_device_id": "system", 00:32:30.168 "dma_device_type": 1 00:32:30.168 }, 00:32:30.168 { 00:32:30.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.168 "dma_device_type": 2 00:32:30.168 } 00:32:30.168 ], 00:32:30.168 "driver_specific": {} 00:32:30.168 } 00:32:30.168 ] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.168 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.168 [2024-10-28 13:43:44.131559] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:30.168 [2024-10-28 13:43:44.131750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:30.168 [2024-10-28 13:43:44.131885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:30.168 [2024-10-28 13:43:44.134497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:30.169 [2024-10-28 13:43:44.134680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:32:30.169 "name": "Existed_Raid", 00:32:30.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.169 "strip_size_kb": 64, 00:32:30.169 "state": "configuring", 00:32:30.169 "raid_level": "raid5f", 00:32:30.169 "superblock": false, 00:32:30.169 "num_base_bdevs": 4, 00:32:30.169 "num_base_bdevs_discovered": 3, 00:32:30.169 "num_base_bdevs_operational": 4, 00:32:30.169 "base_bdevs_list": [ 00:32:30.169 { 00:32:30.169 "name": "BaseBdev1", 00:32:30.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.169 "is_configured": false, 00:32:30.169 "data_offset": 0, 00:32:30.169 "data_size": 0 00:32:30.169 }, 00:32:30.169 { 00:32:30.169 "name": "BaseBdev2", 00:32:30.169 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:30.169 "is_configured": true, 00:32:30.169 "data_offset": 0, 00:32:30.169 "data_size": 65536 00:32:30.169 }, 00:32:30.169 { 00:32:30.169 "name": "BaseBdev3", 00:32:30.169 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:30.169 "is_configured": true, 00:32:30.169 "data_offset": 0, 00:32:30.169 "data_size": 65536 00:32:30.169 }, 00:32:30.169 { 00:32:30.169 "name": "BaseBdev4", 00:32:30.169 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:30.169 "is_configured": true, 00:32:30.169 "data_offset": 0, 00:32:30.169 "data_size": 65536 00:32:30.169 } 00:32:30.169 ] 00:32:30.169 }' 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.169 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.736 [2024-10-28 13:43:44.691740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.736 "name": 
"Existed_Raid", 00:32:30.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.736 "strip_size_kb": 64, 00:32:30.736 "state": "configuring", 00:32:30.736 "raid_level": "raid5f", 00:32:30.736 "superblock": false, 00:32:30.736 "num_base_bdevs": 4, 00:32:30.736 "num_base_bdevs_discovered": 2, 00:32:30.736 "num_base_bdevs_operational": 4, 00:32:30.736 "base_bdevs_list": [ 00:32:30.736 { 00:32:30.736 "name": "BaseBdev1", 00:32:30.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.736 "is_configured": false, 00:32:30.736 "data_offset": 0, 00:32:30.736 "data_size": 0 00:32:30.736 }, 00:32:30.736 { 00:32:30.736 "name": null, 00:32:30.736 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:30.736 "is_configured": false, 00:32:30.736 "data_offset": 0, 00:32:30.736 "data_size": 65536 00:32:30.736 }, 00:32:30.736 { 00:32:30.736 "name": "BaseBdev3", 00:32:30.736 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:30.736 "is_configured": true, 00:32:30.736 "data_offset": 0, 00:32:30.736 "data_size": 65536 00:32:30.736 }, 00:32:30.736 { 00:32:30.736 "name": "BaseBdev4", 00:32:30.736 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:30.736 "is_configured": true, 00:32:30.736 "data_offset": 0, 00:32:30.736 "data_size": 65536 00:32:30.736 } 00:32:30.736 ] 00:32:30.736 }' 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.736 13:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 13:43:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 [2024-10-28 13:43:45.297955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:31.302 BaseBdev1 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.302 13:43:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 [ 00:32:31.302 { 00:32:31.302 "name": "BaseBdev1", 00:32:31.302 "aliases": [ 00:32:31.302 "f132e0e9-2187-41d8-9d63-00f56bd5ef0b" 00:32:31.302 ], 00:32:31.302 "product_name": "Malloc disk", 00:32:31.302 "block_size": 512, 00:32:31.302 "num_blocks": 65536, 00:32:31.302 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:31.302 "assigned_rate_limits": { 00:32:31.302 "rw_ios_per_sec": 0, 00:32:31.302 "rw_mbytes_per_sec": 0, 00:32:31.302 "r_mbytes_per_sec": 0, 00:32:31.302 "w_mbytes_per_sec": 0 00:32:31.302 }, 00:32:31.302 "claimed": true, 00:32:31.302 "claim_type": "exclusive_write", 00:32:31.302 "zoned": false, 00:32:31.302 "supported_io_types": { 00:32:31.302 "read": true, 00:32:31.302 "write": true, 00:32:31.302 "unmap": true, 00:32:31.302 "flush": true, 00:32:31.302 "reset": true, 00:32:31.302 "nvme_admin": false, 00:32:31.302 "nvme_io": false, 00:32:31.302 "nvme_io_md": false, 00:32:31.302 "write_zeroes": true, 00:32:31.302 "zcopy": true, 00:32:31.302 "get_zone_info": false, 00:32:31.302 "zone_management": false, 00:32:31.302 "zone_append": false, 00:32:31.302 "compare": false, 00:32:31.302 "compare_and_write": false, 00:32:31.302 "abort": true, 00:32:31.302 "seek_hole": false, 00:32:31.302 "seek_data": false, 00:32:31.302 "copy": true, 00:32:31.302 "nvme_iov_md": false 00:32:31.302 }, 00:32:31.302 "memory_domains": [ 00:32:31.302 { 00:32:31.302 "dma_device_id": "system", 00:32:31.302 "dma_device_type": 1 00:32:31.302 }, 00:32:31.302 { 00:32:31.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.302 "dma_device_type": 2 00:32:31.302 } 00:32:31.302 ], 00:32:31.302 "driver_specific": {} 00:32:31.302 } 00:32:31.302 ] 
00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.302 13:43:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.302 "name": "Existed_Raid", 00:32:31.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.302 "strip_size_kb": 64, 00:32:31.302 "state": "configuring", 00:32:31.302 "raid_level": "raid5f", 00:32:31.302 "superblock": false, 00:32:31.302 "num_base_bdevs": 4, 00:32:31.302 "num_base_bdevs_discovered": 3, 00:32:31.302 "num_base_bdevs_operational": 4, 00:32:31.302 "base_bdevs_list": [ 00:32:31.302 { 00:32:31.302 "name": "BaseBdev1", 00:32:31.302 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:31.302 "is_configured": true, 00:32:31.302 "data_offset": 0, 00:32:31.302 "data_size": 65536 00:32:31.302 }, 00:32:31.302 { 00:32:31.302 "name": null, 00:32:31.302 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:31.302 "is_configured": false, 00:32:31.302 "data_offset": 0, 00:32:31.302 "data_size": 65536 00:32:31.303 }, 00:32:31.303 { 00:32:31.303 "name": "BaseBdev3", 00:32:31.303 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:31.303 "is_configured": true, 00:32:31.303 "data_offset": 0, 00:32:31.303 "data_size": 65536 00:32:31.303 }, 00:32:31.303 { 00:32:31.303 "name": "BaseBdev4", 00:32:31.303 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:31.303 "is_configured": true, 00:32:31.303 "data_offset": 0, 00:32:31.303 "data_size": 65536 00:32:31.303 } 00:32:31.303 ] 00:32:31.303 }' 00:32:31.303 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.303 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.868 
13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.868 [2024-10-28 13:43:45.918181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.868 13:43:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.868 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.868 "name": "Existed_Raid", 00:32:31.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.868 "strip_size_kb": 64, 00:32:31.868 "state": "configuring", 00:32:31.868 "raid_level": "raid5f", 00:32:31.868 "superblock": false, 00:32:31.868 "num_base_bdevs": 4, 00:32:31.868 "num_base_bdevs_discovered": 2, 00:32:31.868 "num_base_bdevs_operational": 4, 00:32:31.868 "base_bdevs_list": [ 00:32:31.868 { 00:32:31.868 "name": "BaseBdev1", 00:32:31.868 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:31.868 "is_configured": true, 00:32:31.868 "data_offset": 0, 00:32:31.868 "data_size": 65536 00:32:31.868 }, 00:32:31.868 { 00:32:31.868 "name": null, 00:32:31.868 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:31.868 "is_configured": false, 00:32:31.868 "data_offset": 0, 00:32:31.869 "data_size": 65536 00:32:31.869 }, 00:32:31.869 { 00:32:31.869 "name": null, 00:32:31.869 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:31.869 "is_configured": false, 00:32:31.869 "data_offset": 0, 00:32:31.869 "data_size": 65536 00:32:31.869 }, 00:32:31.869 { 00:32:31.869 "name": "BaseBdev4", 00:32:31.869 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:31.869 "is_configured": true, 00:32:31.869 
"data_offset": 0, 00:32:31.869 "data_size": 65536 00:32:31.869 } 00:32:31.869 ] 00:32:31.869 }' 00:32:31.869 13:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.869 13:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.436 [2024-10-28 13:43:46.498460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.436 
13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.436 "name": "Existed_Raid", 00:32:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.436 "strip_size_kb": 64, 00:32:32.436 "state": "configuring", 00:32:32.436 "raid_level": "raid5f", 00:32:32.436 "superblock": false, 00:32:32.436 "num_base_bdevs": 4, 00:32:32.436 "num_base_bdevs_discovered": 3, 00:32:32.436 "num_base_bdevs_operational": 4, 00:32:32.436 "base_bdevs_list": [ 00:32:32.436 { 00:32:32.436 "name": "BaseBdev1", 00:32:32.436 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:32.436 "is_configured": 
true, 00:32:32.436 "data_offset": 0, 00:32:32.436 "data_size": 65536 00:32:32.436 }, 00:32:32.436 { 00:32:32.436 "name": null, 00:32:32.436 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:32.436 "is_configured": false, 00:32:32.436 "data_offset": 0, 00:32:32.436 "data_size": 65536 00:32:32.436 }, 00:32:32.436 { 00:32:32.436 "name": "BaseBdev3", 00:32:32.436 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:32.436 "is_configured": true, 00:32:32.436 "data_offset": 0, 00:32:32.436 "data_size": 65536 00:32:32.436 }, 00:32:32.436 { 00:32:32.436 "name": "BaseBdev4", 00:32:32.436 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:32.436 "is_configured": true, 00:32:32.436 "data_offset": 0, 00:32:32.436 "data_size": 65536 00:32:32.436 } 00:32:32.436 ] 00:32:32.436 }' 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.436 13:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:33.002 [2024-10-28 13:43:47.082616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.002 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.003 "name": "Existed_Raid", 00:32:33.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.003 "strip_size_kb": 64, 00:32:33.003 "state": "configuring", 00:32:33.003 "raid_level": "raid5f", 00:32:33.003 "superblock": false, 00:32:33.003 "num_base_bdevs": 4, 00:32:33.003 "num_base_bdevs_discovered": 2, 00:32:33.003 "num_base_bdevs_operational": 4, 00:32:33.003 "base_bdevs_list": [ 00:32:33.003 { 00:32:33.003 "name": null, 00:32:33.003 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:33.003 "is_configured": false, 00:32:33.003 "data_offset": 0, 00:32:33.003 "data_size": 65536 00:32:33.003 }, 00:32:33.003 { 00:32:33.003 "name": null, 00:32:33.003 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:33.003 "is_configured": false, 00:32:33.003 "data_offset": 0, 00:32:33.003 "data_size": 65536 00:32:33.003 }, 00:32:33.003 { 00:32:33.003 "name": "BaseBdev3", 00:32:33.003 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:33.003 "is_configured": true, 00:32:33.003 "data_offset": 0, 00:32:33.003 "data_size": 65536 00:32:33.003 }, 00:32:33.003 { 00:32:33.003 "name": "BaseBdev4", 00:32:33.003 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:33.003 "is_configured": true, 00:32:33.003 "data_offset": 0, 00:32:33.003 "data_size": 65536 00:32:33.003 } 00:32:33.003 ] 00:32:33.003 }' 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.003 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.571 [2024-10-28 13:43:47.676677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.571 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.830 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.830 "name": "Existed_Raid", 00:32:33.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.830 "strip_size_kb": 64, 00:32:33.830 "state": "configuring", 00:32:33.830 "raid_level": "raid5f", 00:32:33.830 "superblock": false, 00:32:33.830 "num_base_bdevs": 4, 00:32:33.830 "num_base_bdevs_discovered": 3, 00:32:33.830 "num_base_bdevs_operational": 4, 00:32:33.830 "base_bdevs_list": [ 00:32:33.830 { 00:32:33.830 "name": null, 00:32:33.830 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:33.830 "is_configured": false, 00:32:33.830 "data_offset": 0, 00:32:33.830 "data_size": 65536 00:32:33.830 }, 00:32:33.830 { 00:32:33.830 "name": "BaseBdev2", 00:32:33.830 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:33.830 "is_configured": true, 00:32:33.830 "data_offset": 0, 00:32:33.830 "data_size": 65536 00:32:33.830 }, 00:32:33.830 { 00:32:33.830 "name": "BaseBdev3", 00:32:33.830 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:33.830 "is_configured": true, 00:32:33.830 "data_offset": 0, 00:32:33.830 "data_size": 65536 00:32:33.830 }, 00:32:33.830 { 00:32:33.830 "name": "BaseBdev4", 00:32:33.830 "uuid": 
"d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:33.830 "is_configured": true, 00:32:33.830 "data_offset": 0, 00:32:33.830 "data_size": 65536 00:32:33.830 } 00:32:33.830 ] 00:32:33.830 }' 00:32:33.830 13:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.830 13:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.089 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.089 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.089 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.089 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:34.089 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f132e0e9-2187-41d8-9d63-00f56bd5ef0b 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.348 13:43:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.348 [2024-10-28 13:43:48.334660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:34.348 [2024-10-28 13:43:48.334708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:34.348 [2024-10-28 13:43:48.334725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:34.348 [2024-10-28 13:43:48.335007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:32:34.348 [2024-10-28 13:43:48.335674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:34.348 [2024-10-28 13:43:48.335692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:34.348 [2024-10-28 13:43:48.335963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.348 NewBaseBdev 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.348 [ 00:32:34.348 { 00:32:34.348 "name": "NewBaseBdev", 00:32:34.348 "aliases": [ 00:32:34.348 "f132e0e9-2187-41d8-9d63-00f56bd5ef0b" 00:32:34.348 ], 00:32:34.348 "product_name": "Malloc disk", 00:32:34.348 "block_size": 512, 00:32:34.348 "num_blocks": 65536, 00:32:34.348 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:34.348 "assigned_rate_limits": { 00:32:34.348 "rw_ios_per_sec": 0, 00:32:34.348 "rw_mbytes_per_sec": 0, 00:32:34.348 "r_mbytes_per_sec": 0, 00:32:34.348 "w_mbytes_per_sec": 0 00:32:34.348 }, 00:32:34.348 "claimed": true, 00:32:34.348 "claim_type": "exclusive_write", 00:32:34.348 "zoned": false, 00:32:34.348 "supported_io_types": { 00:32:34.348 "read": true, 00:32:34.348 "write": true, 00:32:34.348 "unmap": true, 00:32:34.348 "flush": true, 00:32:34.348 "reset": true, 00:32:34.348 "nvme_admin": false, 00:32:34.348 "nvme_io": false, 00:32:34.348 "nvme_io_md": false, 00:32:34.348 "write_zeroes": true, 00:32:34.348 "zcopy": true, 00:32:34.348 "get_zone_info": false, 00:32:34.348 "zone_management": false, 00:32:34.348 "zone_append": false, 00:32:34.348 "compare": false, 00:32:34.348 "compare_and_write": false, 00:32:34.348 "abort": true, 00:32:34.348 "seek_hole": false, 00:32:34.348 "seek_data": false, 00:32:34.348 "copy": true, 00:32:34.348 "nvme_iov_md": false 00:32:34.348 }, 00:32:34.348 "memory_domains": [ 00:32:34.348 { 
00:32:34.348 "dma_device_id": "system", 00:32:34.348 "dma_device_type": 1 00:32:34.348 }, 00:32:34.348 { 00:32:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:34.348 "dma_device_type": 2 00:32:34.348 } 00:32:34.348 ], 00:32:34.348 "driver_specific": {} 00:32:34.348 } 00:32:34.348 ] 00:32:34.348 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.349 "name": "Existed_Raid", 00:32:34.349 "uuid": "dd7dce65-07ce-484f-aff6-a24fc7996bcf", 00:32:34.349 "strip_size_kb": 64, 00:32:34.349 "state": "online", 00:32:34.349 "raid_level": "raid5f", 00:32:34.349 "superblock": false, 00:32:34.349 "num_base_bdevs": 4, 00:32:34.349 "num_base_bdevs_discovered": 4, 00:32:34.349 "num_base_bdevs_operational": 4, 00:32:34.349 "base_bdevs_list": [ 00:32:34.349 { 00:32:34.349 "name": "NewBaseBdev", 00:32:34.349 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:34.349 "is_configured": true, 00:32:34.349 "data_offset": 0, 00:32:34.349 "data_size": 65536 00:32:34.349 }, 00:32:34.349 { 00:32:34.349 "name": "BaseBdev2", 00:32:34.349 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:34.349 "is_configured": true, 00:32:34.349 "data_offset": 0, 00:32:34.349 "data_size": 65536 00:32:34.349 }, 00:32:34.349 { 00:32:34.349 "name": "BaseBdev3", 00:32:34.349 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:34.349 "is_configured": true, 00:32:34.349 "data_offset": 0, 00:32:34.349 "data_size": 65536 00:32:34.349 }, 00:32:34.349 { 00:32:34.349 "name": "BaseBdev4", 00:32:34.349 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:34.349 "is_configured": true, 00:32:34.349 "data_offset": 0, 00:32:34.349 "data_size": 65536 00:32:34.349 } 00:32:34.349 ] 00:32:34.349 }' 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.349 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.917 [2024-10-28 13:43:48.899206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.917 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.917 "name": "Existed_Raid", 00:32:34.917 "aliases": [ 00:32:34.917 "dd7dce65-07ce-484f-aff6-a24fc7996bcf" 00:32:34.917 ], 00:32:34.917 "product_name": "Raid Volume", 00:32:34.917 "block_size": 512, 00:32:34.917 "num_blocks": 196608, 00:32:34.917 "uuid": "dd7dce65-07ce-484f-aff6-a24fc7996bcf", 00:32:34.917 "assigned_rate_limits": { 00:32:34.917 "rw_ios_per_sec": 0, 00:32:34.917 "rw_mbytes_per_sec": 0, 00:32:34.917 "r_mbytes_per_sec": 0, 00:32:34.917 "w_mbytes_per_sec": 0 00:32:34.917 }, 00:32:34.917 "claimed": false, 00:32:34.917 "zoned": false, 00:32:34.917 "supported_io_types": { 00:32:34.917 
"read": true, 00:32:34.917 "write": true, 00:32:34.917 "unmap": false, 00:32:34.917 "flush": false, 00:32:34.917 "reset": true, 00:32:34.917 "nvme_admin": false, 00:32:34.917 "nvme_io": false, 00:32:34.917 "nvme_io_md": false, 00:32:34.917 "write_zeroes": true, 00:32:34.917 "zcopy": false, 00:32:34.917 "get_zone_info": false, 00:32:34.917 "zone_management": false, 00:32:34.917 "zone_append": false, 00:32:34.917 "compare": false, 00:32:34.917 "compare_and_write": false, 00:32:34.917 "abort": false, 00:32:34.917 "seek_hole": false, 00:32:34.917 "seek_data": false, 00:32:34.917 "copy": false, 00:32:34.917 "nvme_iov_md": false 00:32:34.917 }, 00:32:34.917 "driver_specific": { 00:32:34.917 "raid": { 00:32:34.917 "uuid": "dd7dce65-07ce-484f-aff6-a24fc7996bcf", 00:32:34.917 "strip_size_kb": 64, 00:32:34.917 "state": "online", 00:32:34.918 "raid_level": "raid5f", 00:32:34.918 "superblock": false, 00:32:34.918 "num_base_bdevs": 4, 00:32:34.918 "num_base_bdevs_discovered": 4, 00:32:34.918 "num_base_bdevs_operational": 4, 00:32:34.918 "base_bdevs_list": [ 00:32:34.918 { 00:32:34.918 "name": "NewBaseBdev", 00:32:34.918 "uuid": "f132e0e9-2187-41d8-9d63-00f56bd5ef0b", 00:32:34.918 "is_configured": true, 00:32:34.918 "data_offset": 0, 00:32:34.918 "data_size": 65536 00:32:34.918 }, 00:32:34.918 { 00:32:34.918 "name": "BaseBdev2", 00:32:34.918 "uuid": "93c81d39-c8d1-444a-8c16-4db54d6fd799", 00:32:34.918 "is_configured": true, 00:32:34.918 "data_offset": 0, 00:32:34.918 "data_size": 65536 00:32:34.918 }, 00:32:34.918 { 00:32:34.918 "name": "BaseBdev3", 00:32:34.918 "uuid": "74e67ec1-db54-49c4-82ce-ee2961b1a968", 00:32:34.918 "is_configured": true, 00:32:34.918 "data_offset": 0, 00:32:34.918 "data_size": 65536 00:32:34.918 }, 00:32:34.918 { 00:32:34.918 "name": "BaseBdev4", 00:32:34.918 "uuid": "d51c2134-d714-4172-b46d-f2dfc7c1dbcf", 00:32:34.918 "is_configured": true, 00:32:34.918 "data_offset": 0, 00:32:34.918 "data_size": 65536 00:32:34.918 } 00:32:34.918 ] 00:32:34.918 } 
00:32:34.918 } 00:32:34.918 }' 00:32:34.918 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:34.918 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:34.918 BaseBdev2 00:32:34.918 BaseBdev3 00:32:34.918 BaseBdev4' 00:32:34.918 13:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.918 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.177 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.177 [2024-10-28 13:43:49.262942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:35.177 [2024-10-28 13:43:49.263124] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:35.178 [2024-10-28 13:43:49.263267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:35.178 [2024-10-28 13:43:49.263610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:35.178 [2024-10-28 13:43:49.263649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 95508 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 95508 ']' 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 95508 00:32:35.178 13:43:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95508 00:32:35.178 killing process with pid 95508 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95508' 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 95508 00:32:35.178 [2024-10-28 13:43:49.300955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:35.178 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 95508 00:32:35.437 [2024-10-28 13:43:49.340921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:35.437 13:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:35.437 00:32:35.437 real 0m11.396s 00:32:35.437 user 0m20.248s 00:32:35.437 sys 0m1.683s 00:32:35.437 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:35.437 13:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.437 ************************************ 00:32:35.437 END TEST raid5f_state_function_test 00:32:35.437 ************************************ 00:32:35.715 13:43:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:32:35.715 13:43:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:35.715 13:43:49 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:32:35.715 13:43:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:35.715 ************************************ 00:32:35.715 START TEST raid5f_state_function_test_sb 00:32:35.715 ************************************ 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:35.715 13:43:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:35.715 Process raid pid: 96174 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=96174 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
96174' 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 96174 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 96174 ']' 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:35.715 13:43:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:35.715 [2024-10-28 13:43:49.745074] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:32:35.715 [2024-10-28 13:43:49.745532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.973 [2024-10-28 13:43:49.899208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:35.973 [2024-10-28 13:43:49.923053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.973 [2024-10-28 13:43:49.967483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.973 [2024-10-28 13:43:50.026391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:35.973 [2024-10-28 13:43:50.026449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:36.539 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:36.539 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:32:36.539 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:36.539 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.539 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.539 [2024-10-28 13:43:50.693789] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:36.539 [2024-10-28 13:43:50.693895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:36.539 [2024-10-28 13:43:50.693930] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:36.539 [2024-10-28 13:43:50.693942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:36.539 [2024-10-28 13:43:50.693957] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:36.539 [2024-10-28 13:43:50.693968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:36.539 [2024-10-28 13:43:50.693980] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:32:36.539 [2024-10-28 13:43:50.693990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.798 13:43:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:36.798 "name": "Existed_Raid", 00:32:36.798 "uuid": "b4095973-32ca-4519-82b2-3c35f6025eaf", 00:32:36.798 "strip_size_kb": 64, 00:32:36.798 "state": "configuring", 00:32:36.798 "raid_level": "raid5f", 00:32:36.798 "superblock": true, 00:32:36.798 "num_base_bdevs": 4, 00:32:36.798 "num_base_bdevs_discovered": 0, 00:32:36.798 "num_base_bdevs_operational": 4, 00:32:36.798 "base_bdevs_list": [ 00:32:36.798 { 00:32:36.798 "name": "BaseBdev1", 00:32:36.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.798 "is_configured": false, 00:32:36.798 "data_offset": 0, 00:32:36.798 "data_size": 0 00:32:36.798 }, 00:32:36.798 { 00:32:36.798 "name": "BaseBdev2", 00:32:36.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.798 "is_configured": false, 00:32:36.798 "data_offset": 0, 00:32:36.798 "data_size": 0 00:32:36.798 }, 00:32:36.798 { 00:32:36.798 "name": "BaseBdev3", 00:32:36.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.798 "is_configured": false, 00:32:36.798 "data_offset": 0, 00:32:36.798 "data_size": 0 00:32:36.798 }, 00:32:36.798 { 00:32:36.798 "name": "BaseBdev4", 00:32:36.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.798 "is_configured": false, 00:32:36.798 "data_offset": 0, 00:32:36.798 "data_size": 0 00:32:36.798 } 00:32:36.798 ] 00:32:36.798 }' 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:36.798 13:43:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.364 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:37.364 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.364 13:43:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.364 [2024-10-28 13:43:51.229815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:37.364 [2024-10-28 13:43:51.229855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:32:37.364 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.365 [2024-10-28 13:43:51.237844] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:37.365 [2024-10-28 13:43:51.237892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:37.365 [2024-10-28 13:43:51.237927] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:37.365 [2024-10-28 13:43:51.237939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:37.365 [2024-10-28 13:43:51.237951] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:37.365 [2024-10-28 13:43:51.237963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:37.365 [2024-10-28 13:43:51.237974] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:37.365 [2024-10-28 13:43:51.237985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:37.365 13:43:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.365 [2024-10-28 13:43:51.258518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.365 BaseBdev1 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.365 [ 00:32:37.365 { 00:32:37.365 "name": "BaseBdev1", 00:32:37.365 "aliases": [ 00:32:37.365 "a2f0aad8-129a-4c65-af0a-68f186eaa59a" 00:32:37.365 ], 00:32:37.365 "product_name": "Malloc disk", 00:32:37.365 "block_size": 512, 00:32:37.365 "num_blocks": 65536, 00:32:37.365 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:37.365 "assigned_rate_limits": { 00:32:37.365 "rw_ios_per_sec": 0, 00:32:37.365 "rw_mbytes_per_sec": 0, 00:32:37.365 "r_mbytes_per_sec": 0, 00:32:37.365 "w_mbytes_per_sec": 0 00:32:37.365 }, 00:32:37.365 "claimed": true, 00:32:37.365 "claim_type": "exclusive_write", 00:32:37.365 "zoned": false, 00:32:37.365 "supported_io_types": { 00:32:37.365 "read": true, 00:32:37.365 "write": true, 00:32:37.365 "unmap": true, 00:32:37.365 "flush": true, 00:32:37.365 "reset": true, 00:32:37.365 "nvme_admin": false, 00:32:37.365 "nvme_io": false, 00:32:37.365 "nvme_io_md": false, 00:32:37.365 "write_zeroes": true, 00:32:37.365 "zcopy": true, 00:32:37.365 "get_zone_info": false, 00:32:37.365 "zone_management": false, 00:32:37.365 "zone_append": false, 00:32:37.365 "compare": false, 00:32:37.365 "compare_and_write": false, 00:32:37.365 "abort": true, 00:32:37.365 "seek_hole": false, 00:32:37.365 "seek_data": false, 00:32:37.365 "copy": true, 00:32:37.365 "nvme_iov_md": false 00:32:37.365 }, 00:32:37.365 "memory_domains": [ 00:32:37.365 { 00:32:37.365 "dma_device_id": "system", 00:32:37.365 "dma_device_type": 1 00:32:37.365 }, 00:32:37.365 { 00:32:37.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.365 "dma_device_type": 2 00:32:37.365 } 00:32:37.365 ], 00:32:37.365 "driver_specific": {} 00:32:37.365 } 00:32:37.365 ] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.365 "name": "Existed_Raid", 00:32:37.365 "uuid": "a6347841-dc99-4b8a-97ef-ec8a44d483ba", 00:32:37.365 "strip_size_kb": 64, 00:32:37.365 "state": "configuring", 00:32:37.365 "raid_level": "raid5f", 00:32:37.365 "superblock": true, 00:32:37.365 "num_base_bdevs": 4, 00:32:37.365 "num_base_bdevs_discovered": 1, 00:32:37.365 "num_base_bdevs_operational": 4, 00:32:37.365 "base_bdevs_list": [ 00:32:37.365 { 00:32:37.365 "name": "BaseBdev1", 00:32:37.365 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:37.365 "is_configured": true, 00:32:37.365 "data_offset": 2048, 00:32:37.365 "data_size": 63488 00:32:37.365 }, 00:32:37.365 { 00:32:37.365 "name": "BaseBdev2", 00:32:37.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.365 "is_configured": false, 00:32:37.365 "data_offset": 0, 00:32:37.365 "data_size": 0 00:32:37.365 }, 00:32:37.365 { 00:32:37.365 "name": "BaseBdev3", 00:32:37.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.365 "is_configured": false, 00:32:37.365 "data_offset": 0, 00:32:37.365 "data_size": 0 00:32:37.365 }, 00:32:37.365 { 00:32:37.365 "name": "BaseBdev4", 00:32:37.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.365 "is_configured": false, 00:32:37.365 "data_offset": 0, 00:32:37.365 "data_size": 0 00:32:37.365 } 00:32:37.365 ] 00:32:37.365 }' 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.365 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 [2024-10-28 13:43:51.806756] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:37.932 [2024-10-28 13:43:51.806831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 [2024-10-28 13:43:51.818826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.932 [2024-10-28 13:43:51.821635] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:37.932 [2024-10-28 13:43:51.821842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:37.932 [2024-10-28 13:43:51.821981] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:37.932 [2024-10-28 13:43:51.822103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:37.932 [2024-10-28 13:43:51.822133] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:37.932 [2024-10-28 13:43:51.822195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.932 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.932 "name": "Existed_Raid", 00:32:37.932 "uuid": 
"b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:37.932 "strip_size_kb": 64, 00:32:37.932 "state": "configuring", 00:32:37.932 "raid_level": "raid5f", 00:32:37.932 "superblock": true, 00:32:37.932 "num_base_bdevs": 4, 00:32:37.932 "num_base_bdevs_discovered": 1, 00:32:37.932 "num_base_bdevs_operational": 4, 00:32:37.932 "base_bdevs_list": [ 00:32:37.932 { 00:32:37.932 "name": "BaseBdev1", 00:32:37.932 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:37.932 "is_configured": true, 00:32:37.933 "data_offset": 2048, 00:32:37.933 "data_size": 63488 00:32:37.933 }, 00:32:37.933 { 00:32:37.933 "name": "BaseBdev2", 00:32:37.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.933 "is_configured": false, 00:32:37.933 "data_offset": 0, 00:32:37.933 "data_size": 0 00:32:37.933 }, 00:32:37.933 { 00:32:37.933 "name": "BaseBdev3", 00:32:37.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.933 "is_configured": false, 00:32:37.933 "data_offset": 0, 00:32:37.933 "data_size": 0 00:32:37.933 }, 00:32:37.933 { 00:32:37.933 "name": "BaseBdev4", 00:32:37.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.933 "is_configured": false, 00:32:37.933 "data_offset": 0, 00:32:37.933 "data_size": 0 00:32:37.933 } 00:32:37.933 ] 00:32:37.933 }' 00:32:37.933 13:43:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.933 13:43:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.191 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:38.192 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.192 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.450 [2024-10-28 13:43:52.357087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:38.450 BaseBdev2 00:32:38.450 
13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.450 [ 00:32:38.450 { 00:32:38.450 "name": "BaseBdev2", 00:32:38.450 "aliases": [ 00:32:38.450 "73ef31e5-00de-46f9-a82a-69ac8d1ba229" 00:32:38.450 ], 00:32:38.450 "product_name": "Malloc disk", 00:32:38.450 "block_size": 512, 00:32:38.450 "num_blocks": 65536, 00:32:38.450 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 00:32:38.450 "assigned_rate_limits": { 
00:32:38.450 "rw_ios_per_sec": 0, 00:32:38.450 "rw_mbytes_per_sec": 0, 00:32:38.450 "r_mbytes_per_sec": 0, 00:32:38.450 "w_mbytes_per_sec": 0 00:32:38.450 }, 00:32:38.450 "claimed": true, 00:32:38.450 "claim_type": "exclusive_write", 00:32:38.450 "zoned": false, 00:32:38.450 "supported_io_types": { 00:32:38.450 "read": true, 00:32:38.450 "write": true, 00:32:38.450 "unmap": true, 00:32:38.450 "flush": true, 00:32:38.450 "reset": true, 00:32:38.450 "nvme_admin": false, 00:32:38.450 "nvme_io": false, 00:32:38.450 "nvme_io_md": false, 00:32:38.450 "write_zeroes": true, 00:32:38.450 "zcopy": true, 00:32:38.450 "get_zone_info": false, 00:32:38.450 "zone_management": false, 00:32:38.450 "zone_append": false, 00:32:38.450 "compare": false, 00:32:38.450 "compare_and_write": false, 00:32:38.450 "abort": true, 00:32:38.450 "seek_hole": false, 00:32:38.450 "seek_data": false, 00:32:38.450 "copy": true, 00:32:38.450 "nvme_iov_md": false 00:32:38.450 }, 00:32:38.450 "memory_domains": [ 00:32:38.450 { 00:32:38.450 "dma_device_id": "system", 00:32:38.450 "dma_device_type": 1 00:32:38.450 }, 00:32:38.450 { 00:32:38.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.450 "dma_device_type": 2 00:32:38.450 } 00:32:38.450 ], 00:32:38.450 "driver_specific": {} 00:32:38.450 } 00:32:38.450 ] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.450 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.450 "name": "Existed_Raid", 00:32:38.450 "uuid": "b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:38.451 "strip_size_kb": 64, 00:32:38.451 "state": "configuring", 00:32:38.451 "raid_level": "raid5f", 00:32:38.451 "superblock": true, 00:32:38.451 "num_base_bdevs": 4, 00:32:38.451 "num_base_bdevs_discovered": 2, 00:32:38.451 
"num_base_bdevs_operational": 4, 00:32:38.451 "base_bdevs_list": [ 00:32:38.451 { 00:32:38.451 "name": "BaseBdev1", 00:32:38.451 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:38.451 "is_configured": true, 00:32:38.451 "data_offset": 2048, 00:32:38.451 "data_size": 63488 00:32:38.451 }, 00:32:38.451 { 00:32:38.451 "name": "BaseBdev2", 00:32:38.451 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 00:32:38.451 "is_configured": true, 00:32:38.451 "data_offset": 2048, 00:32:38.451 "data_size": 63488 00:32:38.451 }, 00:32:38.451 { 00:32:38.451 "name": "BaseBdev3", 00:32:38.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.451 "is_configured": false, 00:32:38.451 "data_offset": 0, 00:32:38.451 "data_size": 0 00:32:38.451 }, 00:32:38.451 { 00:32:38.451 "name": "BaseBdev4", 00:32:38.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.451 "is_configured": false, 00:32:38.451 "data_offset": 0, 00:32:38.451 "data_size": 0 00:32:38.451 } 00:32:38.451 ] 00:32:38.451 }' 00:32:38.451 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.451 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.020 [2024-10-28 13:43:52.941162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:39.020 BaseBdev3 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:39.020 13:43:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.020 [ 00:32:39.020 { 00:32:39.020 "name": "BaseBdev3", 00:32:39.020 "aliases": [ 00:32:39.020 "d8433357-f8a6-4a2d-a68b-355a1d972b05" 00:32:39.020 ], 00:32:39.020 "product_name": "Malloc disk", 00:32:39.020 "block_size": 512, 00:32:39.020 "num_blocks": 65536, 00:32:39.020 "uuid": "d8433357-f8a6-4a2d-a68b-355a1d972b05", 00:32:39.020 "assigned_rate_limits": { 00:32:39.020 "rw_ios_per_sec": 0, 00:32:39.020 "rw_mbytes_per_sec": 0, 00:32:39.020 "r_mbytes_per_sec": 0, 00:32:39.020 "w_mbytes_per_sec": 0 00:32:39.020 }, 00:32:39.020 "claimed": true, 00:32:39.020 "claim_type": "exclusive_write", 
00:32:39.020 "zoned": false, 00:32:39.020 "supported_io_types": { 00:32:39.020 "read": true, 00:32:39.020 "write": true, 00:32:39.020 "unmap": true, 00:32:39.020 "flush": true, 00:32:39.020 "reset": true, 00:32:39.020 "nvme_admin": false, 00:32:39.020 "nvme_io": false, 00:32:39.020 "nvme_io_md": false, 00:32:39.020 "write_zeroes": true, 00:32:39.020 "zcopy": true, 00:32:39.020 "get_zone_info": false, 00:32:39.020 "zone_management": false, 00:32:39.020 "zone_append": false, 00:32:39.020 "compare": false, 00:32:39.020 "compare_and_write": false, 00:32:39.020 "abort": true, 00:32:39.020 "seek_hole": false, 00:32:39.020 "seek_data": false, 00:32:39.020 "copy": true, 00:32:39.020 "nvme_iov_md": false 00:32:39.020 }, 00:32:39.020 "memory_domains": [ 00:32:39.020 { 00:32:39.020 "dma_device_id": "system", 00:32:39.020 "dma_device_type": 1 00:32:39.020 }, 00:32:39.020 { 00:32:39.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.020 "dma_device_type": 2 00:32:39.020 } 00:32:39.020 ], 00:32:39.020 "driver_specific": {} 00:32:39.020 } 00:32:39.020 ] 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.020 13:43:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.020 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.020 "name": "Existed_Raid", 00:32:39.020 "uuid": "b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:39.020 "strip_size_kb": 64, 00:32:39.020 "state": "configuring", 00:32:39.020 "raid_level": "raid5f", 00:32:39.020 "superblock": true, 00:32:39.020 "num_base_bdevs": 4, 00:32:39.020 "num_base_bdevs_discovered": 3, 00:32:39.020 "num_base_bdevs_operational": 4, 00:32:39.020 "base_bdevs_list": [ 00:32:39.020 { 00:32:39.020 "name": "BaseBdev1", 00:32:39.020 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:39.020 "is_configured": true, 00:32:39.020 "data_offset": 2048, 
00:32:39.020 "data_size": 63488 00:32:39.020 }, 00:32:39.020 { 00:32:39.020 "name": "BaseBdev2", 00:32:39.020 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 00:32:39.020 "is_configured": true, 00:32:39.020 "data_offset": 2048, 00:32:39.020 "data_size": 63488 00:32:39.020 }, 00:32:39.020 { 00:32:39.020 "name": "BaseBdev3", 00:32:39.020 "uuid": "d8433357-f8a6-4a2d-a68b-355a1d972b05", 00:32:39.020 "is_configured": true, 00:32:39.020 "data_offset": 2048, 00:32:39.020 "data_size": 63488 00:32:39.020 }, 00:32:39.020 { 00:32:39.020 "name": "BaseBdev4", 00:32:39.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.020 "is_configured": false, 00:32:39.020 "data_offset": 0, 00:32:39.020 "data_size": 0 00:32:39.020 } 00:32:39.020 ] 00:32:39.020 }' 00:32:39.020 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.020 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.587 [2024-10-28 13:43:53.535089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:39.587 [2024-10-28 13:43:53.535453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:39.587 [2024-10-28 13:43:53.535490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:39.587 BaseBdev4 00:32:39.587 [2024-10-28 13:43:53.535956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.587 13:43:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:39.587 [2024-10-28 13:43:53.536615] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:39.587 [2024-10-28 13:43:53.536637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:32:39.587 [2024-10-28 13:43:53.536812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.587 [ 00:32:39.587 { 00:32:39.587 "name": "BaseBdev4", 00:32:39.587 "aliases": [ 
00:32:39.587 "ec1ede88-ad50-4167-9e83-bbdf09cc6687" 00:32:39.587 ], 00:32:39.587 "product_name": "Malloc disk", 00:32:39.587 "block_size": 512, 00:32:39.587 "num_blocks": 65536, 00:32:39.587 "uuid": "ec1ede88-ad50-4167-9e83-bbdf09cc6687", 00:32:39.587 "assigned_rate_limits": { 00:32:39.587 "rw_ios_per_sec": 0, 00:32:39.587 "rw_mbytes_per_sec": 0, 00:32:39.587 "r_mbytes_per_sec": 0, 00:32:39.587 "w_mbytes_per_sec": 0 00:32:39.587 }, 00:32:39.587 "claimed": true, 00:32:39.587 "claim_type": "exclusive_write", 00:32:39.587 "zoned": false, 00:32:39.587 "supported_io_types": { 00:32:39.587 "read": true, 00:32:39.587 "write": true, 00:32:39.587 "unmap": true, 00:32:39.587 "flush": true, 00:32:39.587 "reset": true, 00:32:39.587 "nvme_admin": false, 00:32:39.587 "nvme_io": false, 00:32:39.587 "nvme_io_md": false, 00:32:39.587 "write_zeroes": true, 00:32:39.587 "zcopy": true, 00:32:39.587 "get_zone_info": false, 00:32:39.587 "zone_management": false, 00:32:39.587 "zone_append": false, 00:32:39.587 "compare": false, 00:32:39.587 "compare_and_write": false, 00:32:39.587 "abort": true, 00:32:39.587 "seek_hole": false, 00:32:39.587 "seek_data": false, 00:32:39.587 "copy": true, 00:32:39.587 "nvme_iov_md": false 00:32:39.587 }, 00:32:39.587 "memory_domains": [ 00:32:39.587 { 00:32:39.587 "dma_device_id": "system", 00:32:39.587 "dma_device_type": 1 00:32:39.587 }, 00:32:39.587 { 00:32:39.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.587 "dma_device_type": 2 00:32:39.587 } 00:32:39.587 ], 00:32:39.587 "driver_specific": {} 00:32:39.587 } 00:32:39.587 ] 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.587 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.587 "name": "Existed_Raid", 00:32:39.587 "uuid": 
"b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:39.587 "strip_size_kb": 64, 00:32:39.587 "state": "online", 00:32:39.587 "raid_level": "raid5f", 00:32:39.587 "superblock": true, 00:32:39.587 "num_base_bdevs": 4, 00:32:39.587 "num_base_bdevs_discovered": 4, 00:32:39.587 "num_base_bdevs_operational": 4, 00:32:39.587 "base_bdevs_list": [ 00:32:39.587 { 00:32:39.587 "name": "BaseBdev1", 00:32:39.587 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:39.587 "is_configured": true, 00:32:39.587 "data_offset": 2048, 00:32:39.587 "data_size": 63488 00:32:39.587 }, 00:32:39.587 { 00:32:39.587 "name": "BaseBdev2", 00:32:39.587 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 00:32:39.587 "is_configured": true, 00:32:39.587 "data_offset": 2048, 00:32:39.587 "data_size": 63488 00:32:39.587 }, 00:32:39.587 { 00:32:39.587 "name": "BaseBdev3", 00:32:39.587 "uuid": "d8433357-f8a6-4a2d-a68b-355a1d972b05", 00:32:39.587 "is_configured": true, 00:32:39.587 "data_offset": 2048, 00:32:39.587 "data_size": 63488 00:32:39.587 }, 00:32:39.587 { 00:32:39.587 "name": "BaseBdev4", 00:32:39.587 "uuid": "ec1ede88-ad50-4167-9e83-bbdf09cc6687", 00:32:39.587 "is_configured": true, 00:32:39.587 "data_offset": 2048, 00:32:39.587 "data_size": 63488 00:32:39.588 } 00:32:39.588 ] 00:32:39.588 }' 00:32:39.588 13:43:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.588 13:43:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:40.154 13:43:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.154 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.154 [2024-10-28 13:43:54.103629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:40.155 "name": "Existed_Raid", 00:32:40.155 "aliases": [ 00:32:40.155 "b9d5971e-1935-44a6-9856-c51c1f4abe78" 00:32:40.155 ], 00:32:40.155 "product_name": "Raid Volume", 00:32:40.155 "block_size": 512, 00:32:40.155 "num_blocks": 190464, 00:32:40.155 "uuid": "b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:40.155 "assigned_rate_limits": { 00:32:40.155 "rw_ios_per_sec": 0, 00:32:40.155 "rw_mbytes_per_sec": 0, 00:32:40.155 "r_mbytes_per_sec": 0, 00:32:40.155 "w_mbytes_per_sec": 0 00:32:40.155 }, 00:32:40.155 "claimed": false, 00:32:40.155 "zoned": false, 00:32:40.155 "supported_io_types": { 00:32:40.155 "read": true, 00:32:40.155 "write": true, 00:32:40.155 "unmap": false, 00:32:40.155 "flush": false, 00:32:40.155 "reset": true, 00:32:40.155 "nvme_admin": false, 00:32:40.155 "nvme_io": false, 00:32:40.155 "nvme_io_md": false, 00:32:40.155 "write_zeroes": true, 00:32:40.155 "zcopy": false, 00:32:40.155 "get_zone_info": false, 00:32:40.155 "zone_management": false, 00:32:40.155 
"zone_append": false, 00:32:40.155 "compare": false, 00:32:40.155 "compare_and_write": false, 00:32:40.155 "abort": false, 00:32:40.155 "seek_hole": false, 00:32:40.155 "seek_data": false, 00:32:40.155 "copy": false, 00:32:40.155 "nvme_iov_md": false 00:32:40.155 }, 00:32:40.155 "driver_specific": { 00:32:40.155 "raid": { 00:32:40.155 "uuid": "b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:40.155 "strip_size_kb": 64, 00:32:40.155 "state": "online", 00:32:40.155 "raid_level": "raid5f", 00:32:40.155 "superblock": true, 00:32:40.155 "num_base_bdevs": 4, 00:32:40.155 "num_base_bdevs_discovered": 4, 00:32:40.155 "num_base_bdevs_operational": 4, 00:32:40.155 "base_bdevs_list": [ 00:32:40.155 { 00:32:40.155 "name": "BaseBdev1", 00:32:40.155 "uuid": "a2f0aad8-129a-4c65-af0a-68f186eaa59a", 00:32:40.155 "is_configured": true, 00:32:40.155 "data_offset": 2048, 00:32:40.155 "data_size": 63488 00:32:40.155 }, 00:32:40.155 { 00:32:40.155 "name": "BaseBdev2", 00:32:40.155 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 00:32:40.155 "is_configured": true, 00:32:40.155 "data_offset": 2048, 00:32:40.155 "data_size": 63488 00:32:40.155 }, 00:32:40.155 { 00:32:40.155 "name": "BaseBdev3", 00:32:40.155 "uuid": "d8433357-f8a6-4a2d-a68b-355a1d972b05", 00:32:40.155 "is_configured": true, 00:32:40.155 "data_offset": 2048, 00:32:40.155 "data_size": 63488 00:32:40.155 }, 00:32:40.155 { 00:32:40.155 "name": "BaseBdev4", 00:32:40.155 "uuid": "ec1ede88-ad50-4167-9e83-bbdf09cc6687", 00:32:40.155 "is_configured": true, 00:32:40.155 "data_offset": 2048, 00:32:40.155 "data_size": 63488 00:32:40.155 } 00:32:40.155 ] 00:32:40.155 } 00:32:40.155 } 00:32:40.155 }' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:40.155 BaseBdev2 00:32:40.155 BaseBdev3 
00:32:40.155 BaseBdev4' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.155 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.414 [2024-10-28 13:43:54.463526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.414 
13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.414 "name": "Existed_Raid", 00:32:40.414 "uuid": "b9d5971e-1935-44a6-9856-c51c1f4abe78", 00:32:40.414 "strip_size_kb": 64, 00:32:40.414 "state": "online", 00:32:40.414 "raid_level": "raid5f", 00:32:40.414 "superblock": true, 00:32:40.414 "num_base_bdevs": 4, 00:32:40.414 "num_base_bdevs_discovered": 3, 00:32:40.414 "num_base_bdevs_operational": 3, 00:32:40.414 "base_bdevs_list": [ 00:32:40.414 { 00:32:40.414 "name": null, 00:32:40.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.414 "is_configured": false, 00:32:40.414 "data_offset": 0, 00:32:40.414 "data_size": 63488 00:32:40.414 }, 00:32:40.414 { 00:32:40.414 "name": "BaseBdev2", 00:32:40.414 "uuid": "73ef31e5-00de-46f9-a82a-69ac8d1ba229", 
00:32:40.414 "is_configured": true, 00:32:40.414 "data_offset": 2048, 00:32:40.414 "data_size": 63488 00:32:40.414 }, 00:32:40.414 { 00:32:40.414 "name": "BaseBdev3", 00:32:40.414 "uuid": "d8433357-f8a6-4a2d-a68b-355a1d972b05", 00:32:40.414 "is_configured": true, 00:32:40.414 "data_offset": 2048, 00:32:40.414 "data_size": 63488 00:32:40.414 }, 00:32:40.414 { 00:32:40.414 "name": "BaseBdev4", 00:32:40.414 "uuid": "ec1ede88-ad50-4167-9e83-bbdf09cc6687", 00:32:40.414 "is_configured": true, 00:32:40.414 "data_offset": 2048, 00:32:40.414 "data_size": 63488 00:32:40.414 } 00:32:40.414 ] 00:32:40.414 }' 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.414 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.982 13:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:40.982 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.982 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:40.983 
13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.983 [2024-10-28 13:43:55.047556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:40.983 [2024-10-28 13:43:55.047769] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:40.983 [2024-10-28 13:43:55.059556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.983 13:43:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.983 [2024-10-28 13:43:55.115604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.983 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:41.242 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.242 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 [2024-10-28 13:43:55.187365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:41.243 [2024-10-28 13:43:55.187472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:32:41.243 13:43:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 BaseBdev2 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 [ 00:32:41.243 { 00:32:41.243 "name": "BaseBdev2", 00:32:41.243 "aliases": [ 00:32:41.243 "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8" 00:32:41.243 ], 00:32:41.243 "product_name": "Malloc disk", 00:32:41.243 "block_size": 512, 00:32:41.243 "num_blocks": 65536, 00:32:41.243 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:41.243 "assigned_rate_limits": { 00:32:41.243 "rw_ios_per_sec": 0, 00:32:41.243 "rw_mbytes_per_sec": 0, 00:32:41.243 "r_mbytes_per_sec": 0, 00:32:41.243 "w_mbytes_per_sec": 0 00:32:41.243 }, 
00:32:41.243 "claimed": false, 00:32:41.243 "zoned": false, 00:32:41.243 "supported_io_types": { 00:32:41.243 "read": true, 00:32:41.243 "write": true, 00:32:41.243 "unmap": true, 00:32:41.243 "flush": true, 00:32:41.243 "reset": true, 00:32:41.243 "nvme_admin": false, 00:32:41.243 "nvme_io": false, 00:32:41.243 "nvme_io_md": false, 00:32:41.243 "write_zeroes": true, 00:32:41.243 "zcopy": true, 00:32:41.243 "get_zone_info": false, 00:32:41.243 "zone_management": false, 00:32:41.243 "zone_append": false, 00:32:41.243 "compare": false, 00:32:41.243 "compare_and_write": false, 00:32:41.243 "abort": true, 00:32:41.243 "seek_hole": false, 00:32:41.243 "seek_data": false, 00:32:41.243 "copy": true, 00:32:41.243 "nvme_iov_md": false 00:32:41.243 }, 00:32:41.243 "memory_domains": [ 00:32:41.243 { 00:32:41.243 "dma_device_id": "system", 00:32:41.243 "dma_device_type": 1 00:32:41.243 }, 00:32:41.243 { 00:32:41.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.243 "dma_device_type": 2 00:32:41.243 } 00:32:41.243 ], 00:32:41.243 "driver_specific": {} 00:32:41.243 } 00:32:41.243 ] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 BaseBdev3 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.243 [ 00:32:41.243 { 00:32:41.243 "name": "BaseBdev3", 00:32:41.243 "aliases": [ 00:32:41.243 "fb89e91d-0b9a-4561-a727-22502e111941" 00:32:41.243 ], 00:32:41.243 "product_name": "Malloc disk", 00:32:41.243 "block_size": 512, 00:32:41.243 "num_blocks": 65536, 00:32:41.243 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:41.243 "assigned_rate_limits": { 00:32:41.243 "rw_ios_per_sec": 0, 00:32:41.243 
"rw_mbytes_per_sec": 0, 00:32:41.243 "r_mbytes_per_sec": 0, 00:32:41.243 "w_mbytes_per_sec": 0 00:32:41.243 }, 00:32:41.243 "claimed": false, 00:32:41.243 "zoned": false, 00:32:41.243 "supported_io_types": { 00:32:41.243 "read": true, 00:32:41.243 "write": true, 00:32:41.243 "unmap": true, 00:32:41.243 "flush": true, 00:32:41.243 "reset": true, 00:32:41.243 "nvme_admin": false, 00:32:41.243 "nvme_io": false, 00:32:41.243 "nvme_io_md": false, 00:32:41.243 "write_zeroes": true, 00:32:41.243 "zcopy": true, 00:32:41.243 "get_zone_info": false, 00:32:41.243 "zone_management": false, 00:32:41.243 "zone_append": false, 00:32:41.243 "compare": false, 00:32:41.243 "compare_and_write": false, 00:32:41.243 "abort": true, 00:32:41.243 "seek_hole": false, 00:32:41.243 "seek_data": false, 00:32:41.243 "copy": true, 00:32:41.243 "nvme_iov_md": false 00:32:41.243 }, 00:32:41.243 "memory_domains": [ 00:32:41.243 { 00:32:41.243 "dma_device_id": "system", 00:32:41.243 "dma_device_type": 1 00:32:41.243 }, 00:32:41.243 { 00:32:41.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.243 "dma_device_type": 2 00:32:41.243 } 00:32:41.243 ], 00:32:41.243 "driver_specific": {} 00:32:41.243 } 00:32:41.243 ] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:32:41.243 BaseBdev4 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.243 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.244 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.244 [ 00:32:41.244 { 00:32:41.244 "name": "BaseBdev4", 00:32:41.244 "aliases": [ 00:32:41.244 "abc7e2d1-96c2-475a-a01e-7fdd90f19449" 00:32:41.244 ], 00:32:41.244 "product_name": "Malloc disk", 00:32:41.244 "block_size": 512, 00:32:41.244 "num_blocks": 65536, 00:32:41.244 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 
00:32:41.244 "assigned_rate_limits": { 00:32:41.244 "rw_ios_per_sec": 0, 00:32:41.244 "rw_mbytes_per_sec": 0, 00:32:41.244 "r_mbytes_per_sec": 0, 00:32:41.244 "w_mbytes_per_sec": 0 00:32:41.244 }, 00:32:41.244 "claimed": false, 00:32:41.244 "zoned": false, 00:32:41.244 "supported_io_types": { 00:32:41.244 "read": true, 00:32:41.244 "write": true, 00:32:41.244 "unmap": true, 00:32:41.244 "flush": true, 00:32:41.244 "reset": true, 00:32:41.244 "nvme_admin": false, 00:32:41.503 "nvme_io": false, 00:32:41.503 "nvme_io_md": false, 00:32:41.503 "write_zeroes": true, 00:32:41.503 "zcopy": true, 00:32:41.503 "get_zone_info": false, 00:32:41.503 "zone_management": false, 00:32:41.503 "zone_append": false, 00:32:41.503 "compare": false, 00:32:41.503 "compare_and_write": false, 00:32:41.503 "abort": true, 00:32:41.503 "seek_hole": false, 00:32:41.503 "seek_data": false, 00:32:41.503 "copy": true, 00:32:41.503 "nvme_iov_md": false 00:32:41.503 }, 00:32:41.503 "memory_domains": [ 00:32:41.503 { 00:32:41.503 "dma_device_id": "system", 00:32:41.503 "dma_device_type": 1 00:32:41.503 }, 00:32:41.503 { 00:32:41.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.503 "dma_device_type": 2 00:32:41.503 } 00:32:41.503 ], 00:32:41.503 "driver_specific": {} 00:32:41.503 } 00:32:41.503 ] 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.503 [2024-10-28 13:43:55.414196] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:41.503 [2024-10-28 13:43:55.414296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:41.503 [2024-10-28 13:43:55.414325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:41.503 [2024-10-28 13:43:55.416962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:41.503 [2024-10-28 13:43:55.417029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.503 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.503 "name": "Existed_Raid", 00:32:41.503 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:41.503 "strip_size_kb": 64, 00:32:41.503 "state": "configuring", 00:32:41.503 "raid_level": "raid5f", 00:32:41.503 "superblock": true, 00:32:41.503 "num_base_bdevs": 4, 00:32:41.503 "num_base_bdevs_discovered": 3, 00:32:41.503 "num_base_bdevs_operational": 4, 00:32:41.503 "base_bdevs_list": [ 00:32:41.503 { 00:32:41.503 "name": "BaseBdev1", 00:32:41.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.503 "is_configured": false, 00:32:41.503 "data_offset": 0, 00:32:41.503 "data_size": 0 00:32:41.503 }, 00:32:41.503 { 00:32:41.503 "name": "BaseBdev2", 00:32:41.503 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:41.503 "is_configured": true, 00:32:41.503 "data_offset": 2048, 00:32:41.503 "data_size": 63488 00:32:41.503 }, 00:32:41.503 { 00:32:41.503 "name": "BaseBdev3", 00:32:41.503 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:41.503 "is_configured": true, 00:32:41.503 "data_offset": 2048, 00:32:41.503 "data_size": 63488 00:32:41.503 }, 00:32:41.503 { 00:32:41.503 "name": "BaseBdev4", 00:32:41.503 "uuid": 
"abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:41.503 "is_configured": true, 00:32:41.503 "data_offset": 2048, 00:32:41.503 "data_size": 63488 00:32:41.503 } 00:32:41.503 ] 00:32:41.503 }' 00:32:41.504 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.504 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.071 [2024-10-28 13:43:55.938354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.071 13:43:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.071 "name": "Existed_Raid", 00:32:42.071 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:42.071 "strip_size_kb": 64, 00:32:42.071 "state": "configuring", 00:32:42.071 "raid_level": "raid5f", 00:32:42.071 "superblock": true, 00:32:42.071 "num_base_bdevs": 4, 00:32:42.071 "num_base_bdevs_discovered": 2, 00:32:42.071 "num_base_bdevs_operational": 4, 00:32:42.071 "base_bdevs_list": [ 00:32:42.071 { 00:32:42.071 "name": "BaseBdev1", 00:32:42.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.071 "is_configured": false, 00:32:42.071 "data_offset": 0, 00:32:42.071 "data_size": 0 00:32:42.071 }, 00:32:42.071 { 00:32:42.071 "name": null, 00:32:42.071 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:42.071 "is_configured": false, 00:32:42.071 "data_offset": 0, 00:32:42.071 "data_size": 63488 00:32:42.071 }, 00:32:42.071 { 00:32:42.071 "name": "BaseBdev3", 00:32:42.071 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:42.071 "is_configured": true, 00:32:42.071 "data_offset": 2048, 00:32:42.071 "data_size": 63488 00:32:42.071 }, 00:32:42.071 { 
00:32:42.071 "name": "BaseBdev4", 00:32:42.071 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:42.071 "is_configured": true, 00:32:42.071 "data_offset": 2048, 00:32:42.071 "data_size": 63488 00:32:42.071 } 00:32:42.071 ] 00:32:42.071 }' 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.071 13:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.330 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:42.330 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.330 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.330 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.330 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.589 [2024-10-28 13:43:56.537073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:42.589 BaseBdev1 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local bdev_name=BaseBdev1 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.589 [ 00:32:42.589 { 00:32:42.589 "name": "BaseBdev1", 00:32:42.589 "aliases": [ 00:32:42.589 "6db17c7d-e296-4236-9190-1f4630dd5751" 00:32:42.589 ], 00:32:42.589 "product_name": "Malloc disk", 00:32:42.589 "block_size": 512, 00:32:42.589 "num_blocks": 65536, 00:32:42.589 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:42.589 "assigned_rate_limits": { 00:32:42.589 "rw_ios_per_sec": 0, 00:32:42.589 "rw_mbytes_per_sec": 0, 00:32:42.589 "r_mbytes_per_sec": 0, 00:32:42.589 "w_mbytes_per_sec": 0 00:32:42.589 }, 00:32:42.589 "claimed": true, 00:32:42.589 "claim_type": "exclusive_write", 00:32:42.589 "zoned": false, 00:32:42.589 "supported_io_types": { 00:32:42.589 
"read": true, 00:32:42.589 "write": true, 00:32:42.589 "unmap": true, 00:32:42.589 "flush": true, 00:32:42.589 "reset": true, 00:32:42.589 "nvme_admin": false, 00:32:42.589 "nvme_io": false, 00:32:42.589 "nvme_io_md": false, 00:32:42.589 "write_zeroes": true, 00:32:42.589 "zcopy": true, 00:32:42.589 "get_zone_info": false, 00:32:42.589 "zone_management": false, 00:32:42.589 "zone_append": false, 00:32:42.589 "compare": false, 00:32:42.589 "compare_and_write": false, 00:32:42.589 "abort": true, 00:32:42.589 "seek_hole": false, 00:32:42.589 "seek_data": false, 00:32:42.589 "copy": true, 00:32:42.589 "nvme_iov_md": false 00:32:42.589 }, 00:32:42.589 "memory_domains": [ 00:32:42.589 { 00:32:42.589 "dma_device_id": "system", 00:32:42.589 "dma_device_type": 1 00:32:42.589 }, 00:32:42.589 { 00:32:42.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.589 "dma_device_type": 2 00:32:42.589 } 00:32:42.589 ], 00:32:42.589 "driver_specific": {} 00:32:42.589 } 00:32:42.589 ] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:42.589 13:43:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.589 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.590 "name": "Existed_Raid", 00:32:42.590 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:42.590 "strip_size_kb": 64, 00:32:42.590 "state": "configuring", 00:32:42.590 "raid_level": "raid5f", 00:32:42.590 "superblock": true, 00:32:42.590 "num_base_bdevs": 4, 00:32:42.590 "num_base_bdevs_discovered": 3, 00:32:42.590 "num_base_bdevs_operational": 4, 00:32:42.590 "base_bdevs_list": [ 00:32:42.590 { 00:32:42.590 "name": "BaseBdev1", 00:32:42.590 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:42.590 "is_configured": true, 00:32:42.590 "data_offset": 2048, 00:32:42.590 "data_size": 63488 00:32:42.590 }, 00:32:42.590 { 00:32:42.590 "name": null, 00:32:42.590 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:42.590 "is_configured": false, 00:32:42.590 "data_offset": 0, 00:32:42.590 "data_size": 63488 00:32:42.590 }, 00:32:42.590 { 
00:32:42.590 "name": "BaseBdev3", 00:32:42.590 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:42.590 "is_configured": true, 00:32:42.590 "data_offset": 2048, 00:32:42.590 "data_size": 63488 00:32:42.590 }, 00:32:42.590 { 00:32:42.590 "name": "BaseBdev4", 00:32:42.590 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:42.590 "is_configured": true, 00:32:42.590 "data_offset": 2048, 00:32:42.590 "data_size": 63488 00:32:42.590 } 00:32:42.590 ] 00:32:42.590 }' 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.590 13:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.158 [2024-10-28 13:43:57.157379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.158 13:43:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:43.158 "name": "Existed_Raid", 00:32:43.158 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 
00:32:43.158 "strip_size_kb": 64, 00:32:43.158 "state": "configuring", 00:32:43.158 "raid_level": "raid5f", 00:32:43.158 "superblock": true, 00:32:43.158 "num_base_bdevs": 4, 00:32:43.158 "num_base_bdevs_discovered": 2, 00:32:43.158 "num_base_bdevs_operational": 4, 00:32:43.158 "base_bdevs_list": [ 00:32:43.158 { 00:32:43.158 "name": "BaseBdev1", 00:32:43.158 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:43.158 "is_configured": true, 00:32:43.158 "data_offset": 2048, 00:32:43.158 "data_size": 63488 00:32:43.158 }, 00:32:43.158 { 00:32:43.158 "name": null, 00:32:43.158 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:43.158 "is_configured": false, 00:32:43.158 "data_offset": 0, 00:32:43.158 "data_size": 63488 00:32:43.158 }, 00:32:43.158 { 00:32:43.158 "name": null, 00:32:43.158 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:43.158 "is_configured": false, 00:32:43.158 "data_offset": 0, 00:32:43.158 "data_size": 63488 00:32:43.158 }, 00:32:43.158 { 00:32:43.158 "name": "BaseBdev4", 00:32:43.158 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:43.158 "is_configured": true, 00:32:43.158 "data_offset": 2048, 00:32:43.158 "data_size": 63488 00:32:43.158 } 00:32:43.158 ] 00:32:43.158 }' 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:43.158 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.724 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.724 [2024-10-28 13:43:57.737590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:43.725 
13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:43.725 "name": "Existed_Raid", 00:32:43.725 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:43.725 "strip_size_kb": 64, 00:32:43.725 "state": "configuring", 00:32:43.725 "raid_level": "raid5f", 00:32:43.725 "superblock": true, 00:32:43.725 "num_base_bdevs": 4, 00:32:43.725 "num_base_bdevs_discovered": 3, 00:32:43.725 "num_base_bdevs_operational": 4, 00:32:43.725 "base_bdevs_list": [ 00:32:43.725 { 00:32:43.725 "name": "BaseBdev1", 00:32:43.725 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:43.725 "is_configured": true, 00:32:43.725 "data_offset": 2048, 00:32:43.725 "data_size": 63488 00:32:43.725 }, 00:32:43.725 { 00:32:43.725 "name": null, 00:32:43.725 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:43.725 "is_configured": false, 00:32:43.725 "data_offset": 0, 00:32:43.725 "data_size": 63488 00:32:43.725 }, 00:32:43.725 { 00:32:43.725 "name": "BaseBdev3", 00:32:43.725 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:43.725 "is_configured": true, 00:32:43.725 "data_offset": 2048, 00:32:43.725 "data_size": 63488 00:32:43.725 }, 00:32:43.725 { 00:32:43.725 "name": "BaseBdev4", 00:32:43.725 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:43.725 "is_configured": true, 00:32:43.725 "data_offset": 2048, 00:32:43.725 "data_size": 63488 00:32:43.725 } 
00:32:43.725 ] 00:32:43.725 }' 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:43.725 13:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.322 [2024-10-28 13:43:58.309747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.322 "name": "Existed_Raid", 00:32:44.322 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:44.322 "strip_size_kb": 64, 00:32:44.322 "state": "configuring", 00:32:44.322 "raid_level": "raid5f", 00:32:44.322 "superblock": true, 00:32:44.322 "num_base_bdevs": 4, 00:32:44.322 "num_base_bdevs_discovered": 2, 00:32:44.322 "num_base_bdevs_operational": 4, 00:32:44.322 "base_bdevs_list": [ 00:32:44.322 { 00:32:44.322 "name": null, 00:32:44.322 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:44.322 "is_configured": false, 00:32:44.322 
"data_offset": 0, 00:32:44.322 "data_size": 63488 00:32:44.322 }, 00:32:44.322 { 00:32:44.322 "name": null, 00:32:44.322 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:44.322 "is_configured": false, 00:32:44.322 "data_offset": 0, 00:32:44.322 "data_size": 63488 00:32:44.322 }, 00:32:44.322 { 00:32:44.322 "name": "BaseBdev3", 00:32:44.322 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:44.322 "is_configured": true, 00:32:44.322 "data_offset": 2048, 00:32:44.322 "data_size": 63488 00:32:44.322 }, 00:32:44.322 { 00:32:44.322 "name": "BaseBdev4", 00:32:44.322 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:44.322 "is_configured": true, 00:32:44.322 "data_offset": 2048, 00:32:44.322 "data_size": 63488 00:32:44.322 } 00:32:44.322 ] 00:32:44.322 }' 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.322 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.891 13:43:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.891 [2024-10-28 13:43:58.904927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.891 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.892 "name": "Existed_Raid", 00:32:44.892 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:44.892 "strip_size_kb": 64, 00:32:44.892 "state": "configuring", 00:32:44.892 "raid_level": "raid5f", 00:32:44.892 "superblock": true, 00:32:44.892 "num_base_bdevs": 4, 00:32:44.892 "num_base_bdevs_discovered": 3, 00:32:44.892 "num_base_bdevs_operational": 4, 00:32:44.892 "base_bdevs_list": [ 00:32:44.892 { 00:32:44.892 "name": null, 00:32:44.892 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:44.892 "is_configured": false, 00:32:44.892 "data_offset": 0, 00:32:44.892 "data_size": 63488 00:32:44.892 }, 00:32:44.892 { 00:32:44.892 "name": "BaseBdev2", 00:32:44.892 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:44.892 "is_configured": true, 00:32:44.892 "data_offset": 2048, 00:32:44.892 "data_size": 63488 00:32:44.892 }, 00:32:44.892 { 00:32:44.892 "name": "BaseBdev3", 00:32:44.892 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:44.892 "is_configured": true, 00:32:44.892 "data_offset": 2048, 00:32:44.892 "data_size": 63488 00:32:44.892 }, 00:32:44.892 { 00:32:44.892 "name": "BaseBdev4", 00:32:44.892 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:44.892 "is_configured": true, 00:32:44.892 "data_offset": 2048, 00:32:44.892 "data_size": 63488 00:32:44.892 } 00:32:44.892 ] 00:32:44.892 }' 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.892 13:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:45.459 13:43:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6db17c7d-e296-4236-9190-1f4630dd5751 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 [2024-10-28 13:43:59.550951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:45.459 [2024-10-28 13:43:59.551227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:45.459 [2024-10-28 13:43:59.551256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:45.459 NewBaseBdev 00:32:45.459 [2024-10-28 13:43:59.551581] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:32:45.459 [2024-10-28 13:43:59.552182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:45.459 [2024-10-28 13:43:59.552222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.459 [2024-10-28 13:43:59.552376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 [ 00:32:45.459 { 00:32:45.459 "name": "NewBaseBdev", 00:32:45.459 "aliases": [ 00:32:45.459 "6db17c7d-e296-4236-9190-1f4630dd5751" 00:32:45.459 ], 00:32:45.459 "product_name": "Malloc disk", 00:32:45.459 "block_size": 512, 00:32:45.459 "num_blocks": 65536, 00:32:45.459 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:45.459 "assigned_rate_limits": { 00:32:45.459 "rw_ios_per_sec": 0, 00:32:45.459 "rw_mbytes_per_sec": 0, 00:32:45.459 "r_mbytes_per_sec": 0, 00:32:45.459 "w_mbytes_per_sec": 0 00:32:45.459 }, 00:32:45.459 "claimed": true, 00:32:45.459 "claim_type": "exclusive_write", 00:32:45.459 "zoned": false, 00:32:45.459 "supported_io_types": { 00:32:45.459 "read": true, 00:32:45.459 "write": true, 00:32:45.459 "unmap": true, 00:32:45.459 "flush": true, 00:32:45.459 "reset": true, 00:32:45.459 "nvme_admin": false, 00:32:45.459 "nvme_io": false, 00:32:45.459 "nvme_io_md": false, 00:32:45.459 "write_zeroes": true, 00:32:45.459 "zcopy": true, 00:32:45.459 "get_zone_info": false, 00:32:45.459 "zone_management": false, 00:32:45.459 "zone_append": false, 00:32:45.459 "compare": false, 00:32:45.459 "compare_and_write": false, 00:32:45.459 "abort": true, 00:32:45.459 "seek_hole": false, 00:32:45.459 "seek_data": false, 00:32:45.459 "copy": true, 00:32:45.459 "nvme_iov_md": false 00:32:45.459 }, 00:32:45.459 "memory_domains": [ 00:32:45.459 { 00:32:45.459 "dma_device_id": "system", 00:32:45.459 "dma_device_type": 1 00:32:45.459 }, 00:32:45.459 { 00:32:45.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.459 "dma_device_type": 2 00:32:45.459 } 00:32:45.459 ], 00:32:45.459 "driver_specific": {} 00:32:45.459 } 00:32:45.459 ] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.459 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.718 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.718 "name": "Existed_Raid", 00:32:45.718 "uuid": 
"66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:45.718 "strip_size_kb": 64, 00:32:45.718 "state": "online", 00:32:45.718 "raid_level": "raid5f", 00:32:45.718 "superblock": true, 00:32:45.718 "num_base_bdevs": 4, 00:32:45.718 "num_base_bdevs_discovered": 4, 00:32:45.718 "num_base_bdevs_operational": 4, 00:32:45.718 "base_bdevs_list": [ 00:32:45.718 { 00:32:45.718 "name": "NewBaseBdev", 00:32:45.718 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:45.718 "is_configured": true, 00:32:45.718 "data_offset": 2048, 00:32:45.718 "data_size": 63488 00:32:45.718 }, 00:32:45.718 { 00:32:45.718 "name": "BaseBdev2", 00:32:45.718 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:45.718 "is_configured": true, 00:32:45.718 "data_offset": 2048, 00:32:45.718 "data_size": 63488 00:32:45.718 }, 00:32:45.718 { 00:32:45.718 "name": "BaseBdev3", 00:32:45.718 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:45.718 "is_configured": true, 00:32:45.718 "data_offset": 2048, 00:32:45.718 "data_size": 63488 00:32:45.718 }, 00:32:45.718 { 00:32:45.718 "name": "BaseBdev4", 00:32:45.718 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:45.718 "is_configured": true, 00:32:45.718 "data_offset": 2048, 00:32:45.718 "data_size": 63488 00:32:45.718 } 00:32:45.718 ] 00:32:45.718 }' 00:32:45.718 13:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.718 13:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:45.976 13:44:00 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.976 [2024-10-28 13:44:00.111536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:45.976 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.235 "name": "Existed_Raid", 00:32:46.235 "aliases": [ 00:32:46.235 "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d" 00:32:46.235 ], 00:32:46.235 "product_name": "Raid Volume", 00:32:46.235 "block_size": 512, 00:32:46.235 "num_blocks": 190464, 00:32:46.235 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:46.235 "assigned_rate_limits": { 00:32:46.235 "rw_ios_per_sec": 0, 00:32:46.235 "rw_mbytes_per_sec": 0, 00:32:46.235 "r_mbytes_per_sec": 0, 00:32:46.235 "w_mbytes_per_sec": 0 00:32:46.235 }, 00:32:46.235 "claimed": false, 00:32:46.235 "zoned": false, 00:32:46.235 "supported_io_types": { 00:32:46.235 "read": true, 00:32:46.235 "write": true, 00:32:46.235 "unmap": false, 00:32:46.235 "flush": false, 00:32:46.235 "reset": true, 00:32:46.235 "nvme_admin": false, 00:32:46.235 "nvme_io": false, 00:32:46.235 "nvme_io_md": false, 00:32:46.235 "write_zeroes": true, 00:32:46.235 "zcopy": false, 00:32:46.235 "get_zone_info": false, 00:32:46.235 "zone_management": false, 00:32:46.235 
"zone_append": false, 00:32:46.235 "compare": false, 00:32:46.235 "compare_and_write": false, 00:32:46.235 "abort": false, 00:32:46.235 "seek_hole": false, 00:32:46.235 "seek_data": false, 00:32:46.235 "copy": false, 00:32:46.235 "nvme_iov_md": false 00:32:46.235 }, 00:32:46.235 "driver_specific": { 00:32:46.235 "raid": { 00:32:46.235 "uuid": "66f6a6a7-712b-4f6e-bbe9-48b5d3a5444d", 00:32:46.235 "strip_size_kb": 64, 00:32:46.235 "state": "online", 00:32:46.235 "raid_level": "raid5f", 00:32:46.235 "superblock": true, 00:32:46.235 "num_base_bdevs": 4, 00:32:46.235 "num_base_bdevs_discovered": 4, 00:32:46.235 "num_base_bdevs_operational": 4, 00:32:46.235 "base_bdevs_list": [ 00:32:46.235 { 00:32:46.235 "name": "NewBaseBdev", 00:32:46.235 "uuid": "6db17c7d-e296-4236-9190-1f4630dd5751", 00:32:46.235 "is_configured": true, 00:32:46.235 "data_offset": 2048, 00:32:46.235 "data_size": 63488 00:32:46.235 }, 00:32:46.235 { 00:32:46.235 "name": "BaseBdev2", 00:32:46.235 "uuid": "02b2f0c2-ba70-4f0d-b905-1b0d19c3d4f8", 00:32:46.235 "is_configured": true, 00:32:46.235 "data_offset": 2048, 00:32:46.235 "data_size": 63488 00:32:46.235 }, 00:32:46.235 { 00:32:46.235 "name": "BaseBdev3", 00:32:46.235 "uuid": "fb89e91d-0b9a-4561-a727-22502e111941", 00:32:46.235 "is_configured": true, 00:32:46.235 "data_offset": 2048, 00:32:46.235 "data_size": 63488 00:32:46.235 }, 00:32:46.235 { 00:32:46.235 "name": "BaseBdev4", 00:32:46.235 "uuid": "abc7e2d1-96c2-475a-a01e-7fdd90f19449", 00:32:46.235 "is_configured": true, 00:32:46.235 "data_offset": 2048, 00:32:46.235 "data_size": 63488 00:32:46.235 } 00:32:46.235 ] 00:32:46.235 } 00:32:46.235 } 00:32:46.235 }' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:46.235 BaseBdev2 00:32:46.235 BaseBdev3 
00:32:46.235 BaseBdev4' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.235 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.494 [2024-10-28 13:44:00.479328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:46.494 [2024-10-28 13:44:00.479366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.494 [2024-10-28 13:44:00.479492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.494 [2024-10-28 13:44:00.479837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:46.494 [2024-10-28 13:44:00.479864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 96174 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 96174 ']' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 96174 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.494 13:44:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96174 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:46.494 killing process with pid 96174 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96174' 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 96174 00:32:46.494 [2024-10-28 13:44:00.517446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:46.494 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 96174 00:32:46.494 [2024-10-28 13:44:00.558645] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:46.753 13:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:46.753 ************************************ 00:32:46.753 END TEST raid5f_state_function_test_sb 00:32:46.753 ************************************ 00:32:46.753 00:32:46.753 real 0m11.178s 00:32:46.753 user 0m19.848s 00:32:46.753 sys 0m1.634s 00:32:46.753 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:46.753 13:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.753 13:44:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:32:46.753 13:44:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:46.753 13:44:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.753 13:44:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:46.753 ************************************ 00:32:46.753 START TEST 
raid5f_superblock_test 00:32:46.753 ************************************ 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=96841 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 96841 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 96841 ']' 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.753 13:44:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.012 [2024-10-28 13:44:00.964169] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:32:47.012 [2024-10-28 13:44:00.964381] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96841 ] 00:32:47.012 [2024-10-28 13:44:01.109093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:47.012 [2024-10-28 13:44:01.141410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.270 [2024-10-28 13:44:01.193927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.270 [2024-10-28 13:44:01.255951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:47.270 [2024-10-28 13:44:01.256000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:47.838 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.838 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:32:47.838 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 malloc1 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 [2024-10-28 13:44:01.928416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:47.839 [2024-10-28 13:44:01.928506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.839 [2024-10-28 13:44:01.928560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:47.839 [2024-10-28 13:44:01.928585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.839 [2024-10-28 13:44:01.931739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.839 [2024-10-28 13:44:01.931786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:47.839 pt1 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.839 13:44:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 malloc2 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 [2024-10-28 13:44:01.961417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:47.839 [2024-10-28 13:44:01.961483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.839 [2024-10-28 13:44:01.961514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:47.839 [2024-10-28 13:44:01.961530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.839 [2024-10-28 13:44:01.964455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.839 [2024-10-28 13:44:01.964515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:47.839 pt2 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 malloc3 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.839 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.839 [2024-10-28 13:44:01.995135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:47.839 [2024-10-28 13:44:01.995253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.839 [2024-10-28 13:44:01.995300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:32:47.839 [2024-10-28 13:44:01.995327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.098 [2024-10-28 13:44:01.999082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.098 [2024-10-28 13:44:01.999190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:48.098 pt3 00:32:48.098 13:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.098 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:48.098 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:48.098 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:48.098 13:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 malloc4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 [2024-10-28 13:44:02.040133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:48.098 [2024-10-28 13:44:02.040267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.098 [2024-10-28 13:44:02.040312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:48.098 [2024-10-28 13:44:02.040337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.098 [2024-10-28 13:44:02.044313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.098 [2024-10-28 13:44:02.044395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:48.098 pt4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 [2024-10-28 13:44:02.052740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:48.098 [2024-10-28 13:44:02.055556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:48.098 [2024-10-28 13:44:02.055664] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:48.098 [2024-10-28 13:44:02.055761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:48.098 [2024-10-28 13:44:02.056052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:32:48.098 [2024-10-28 13:44:02.056073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:48.098 [2024-10-28 13:44:02.056481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:32:48.098 [2024-10-28 13:44:02.057169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:32:48.098 [2024-10-28 13:44:02.057208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:32:48.098 [2024-10-28 13:44:02.057442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.098 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:48.098 "name": "raid_bdev1", 00:32:48.098 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:48.098 "strip_size_kb": 64, 00:32:48.098 "state": "online", 00:32:48.098 "raid_level": "raid5f", 00:32:48.098 "superblock": true, 00:32:48.099 "num_base_bdevs": 4, 00:32:48.099 "num_base_bdevs_discovered": 4, 00:32:48.099 "num_base_bdevs_operational": 4, 00:32:48.099 "base_bdevs_list": [ 00:32:48.099 { 00:32:48.099 "name": "pt1", 00:32:48.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:48.099 "is_configured": true, 00:32:48.099 "data_offset": 2048, 00:32:48.099 "data_size": 63488 00:32:48.099 }, 00:32:48.099 { 00:32:48.099 "name": "pt2", 00:32:48.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.099 "is_configured": true, 00:32:48.099 "data_offset": 2048, 00:32:48.099 "data_size": 63488 00:32:48.099 }, 00:32:48.099 { 00:32:48.099 "name": "pt3", 00:32:48.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.099 "is_configured": true, 00:32:48.099 "data_offset": 2048, 00:32:48.099 "data_size": 63488 00:32:48.099 }, 00:32:48.099 { 00:32:48.099 "name": "pt4", 00:32:48.099 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:32:48.099 "is_configured": true, 00:32:48.099 "data_offset": 2048, 00:32:48.099 "data_size": 63488 00:32:48.099 } 00:32:48.099 ] 00:32:48.099 }' 00:32:48.099 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:48.099 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:48.666 [2024-10-28 13:44:02.573890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.666 "name": "raid_bdev1", 00:32:48.666 "aliases": [ 00:32:48.666 "b0a570f7-7f68-4db4-8fa9-9e65b76fca95" 00:32:48.666 ], 00:32:48.666 "product_name": "Raid Volume", 00:32:48.666 
"block_size": 512, 00:32:48.666 "num_blocks": 190464, 00:32:48.666 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:48.666 "assigned_rate_limits": { 00:32:48.666 "rw_ios_per_sec": 0, 00:32:48.666 "rw_mbytes_per_sec": 0, 00:32:48.666 "r_mbytes_per_sec": 0, 00:32:48.666 "w_mbytes_per_sec": 0 00:32:48.666 }, 00:32:48.666 "claimed": false, 00:32:48.666 "zoned": false, 00:32:48.666 "supported_io_types": { 00:32:48.666 "read": true, 00:32:48.666 "write": true, 00:32:48.666 "unmap": false, 00:32:48.666 "flush": false, 00:32:48.666 "reset": true, 00:32:48.666 "nvme_admin": false, 00:32:48.666 "nvme_io": false, 00:32:48.666 "nvme_io_md": false, 00:32:48.666 "write_zeroes": true, 00:32:48.666 "zcopy": false, 00:32:48.666 "get_zone_info": false, 00:32:48.666 "zone_management": false, 00:32:48.666 "zone_append": false, 00:32:48.666 "compare": false, 00:32:48.666 "compare_and_write": false, 00:32:48.666 "abort": false, 00:32:48.666 "seek_hole": false, 00:32:48.666 "seek_data": false, 00:32:48.666 "copy": false, 00:32:48.666 "nvme_iov_md": false 00:32:48.666 }, 00:32:48.666 "driver_specific": { 00:32:48.666 "raid": { 00:32:48.666 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:48.666 "strip_size_kb": 64, 00:32:48.666 "state": "online", 00:32:48.666 "raid_level": "raid5f", 00:32:48.666 "superblock": true, 00:32:48.666 "num_base_bdevs": 4, 00:32:48.666 "num_base_bdevs_discovered": 4, 00:32:48.666 "num_base_bdevs_operational": 4, 00:32:48.666 "base_bdevs_list": [ 00:32:48.666 { 00:32:48.666 "name": "pt1", 00:32:48.666 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:48.666 "is_configured": true, 00:32:48.666 "data_offset": 2048, 00:32:48.666 "data_size": 63488 00:32:48.666 }, 00:32:48.666 { 00:32:48.666 "name": "pt2", 00:32:48.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.666 "is_configured": true, 00:32:48.666 "data_offset": 2048, 00:32:48.666 "data_size": 63488 00:32:48.666 }, 00:32:48.666 { 00:32:48.666 "name": "pt3", 00:32:48.666 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:32:48.666 "is_configured": true, 00:32:48.666 "data_offset": 2048, 00:32:48.666 "data_size": 63488 00:32:48.666 }, 00:32:48.666 { 00:32:48.666 "name": "pt4", 00:32:48.666 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:48.666 "is_configured": true, 00:32:48.666 "data_offset": 2048, 00:32:48.666 "data_size": 63488 00:32:48.666 } 00:32:48.666 ] 00:32:48.666 } 00:32:48.666 } 00:32:48.666 }' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:48.666 pt2 00:32:48.666 pt3 00:32:48.666 pt4' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.666 13:44:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.666 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:48.925 [2024-10-28 13:44:02.945920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b0a570f7-7f68-4db4-8fa9-9e65b76fca95 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b0a570f7-7f68-4db4-8fa9-9e65b76fca95 ']' 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:48.925 [2024-10-28 13:44:03.001685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:48.925 [2024-10-28 13:44:03.001724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:48.925 [2024-10-28 13:44:03.001867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:48.925 [2024-10-28 13:44:03.002026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:48.925 [2024-10-28 13:44:03.002052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.925 13:44:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.925 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:48.926 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.926 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.184 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:49.185 13:44:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 [2024-10-28 13:44:03.169896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:49.185 [2024-10-28 13:44:03.173303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:32:49.185 [2024-10-28 13:44:03.173400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:49.185 [2024-10-28 13:44:03.173477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:49.185 [2024-10-28 13:44:03.173570] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:49.185 [2024-10-28 13:44:03.173666] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:49.185 [2024-10-28 13:44:03.173714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:49.185 [2024-10-28 13:44:03.173767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:49.185 [2024-10-28 13:44:03.173804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:49.185 [2024-10-28 13:44:03.173836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:32:49.185 request: 00:32:49.185 { 00:32:49.185 "name": "raid_bdev1", 00:32:49.185 "raid_level": "raid5f", 00:32:49.185 "base_bdevs": [ 00:32:49.185 "malloc1", 00:32:49.185 "malloc2", 00:32:49.185 "malloc3", 00:32:49.185 "malloc4" 00:32:49.185 ], 00:32:49.185 "strip_size_kb": 64, 00:32:49.185 "superblock": false, 00:32:49.185 "method": "bdev_raid_create", 00:32:49.185 "req_id": 1 00:32:49.185 } 00:32:49.185 Got JSON-RPC error response 00:32:49.185 response: 00:32:49.185 { 00:32:49.185 "code": -17, 00:32:49.185 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:49.185 } 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # 
es=1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 [2024-10-28 13:44:03.234124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:49.185 [2024-10-28 13:44:03.234248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.185 [2024-10-28 13:44:03.234292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:49.185 [2024-10-28 13:44:03.234330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.185 [2024-10-28 13:44:03.237497] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:32:49.185 [2024-10-28 13:44:03.237549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:49.185 [2024-10-28 13:44:03.237637] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:49.185 [2024-10-28 13:44:03.237696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:49.185 pt1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.185 "name": "raid_bdev1", 00:32:49.185 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:49.185 "strip_size_kb": 64, 00:32:49.185 "state": "configuring", 00:32:49.185 "raid_level": "raid5f", 00:32:49.185 "superblock": true, 00:32:49.185 "num_base_bdevs": 4, 00:32:49.185 "num_base_bdevs_discovered": 1, 00:32:49.185 "num_base_bdevs_operational": 4, 00:32:49.185 "base_bdevs_list": [ 00:32:49.185 { 00:32:49.185 "name": "pt1", 00:32:49.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:49.185 "is_configured": true, 00:32:49.185 "data_offset": 2048, 00:32:49.185 "data_size": 63488 00:32:49.185 }, 00:32:49.185 { 00:32:49.185 "name": null, 00:32:49.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.185 "is_configured": false, 00:32:49.185 "data_offset": 2048, 00:32:49.185 "data_size": 63488 00:32:49.185 }, 00:32:49.185 { 00:32:49.185 "name": null, 00:32:49.185 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:49.185 "is_configured": false, 00:32:49.185 "data_offset": 2048, 00:32:49.185 "data_size": 63488 00:32:49.185 }, 00:32:49.185 { 00:32:49.185 "name": null, 00:32:49.185 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:49.185 "is_configured": false, 00:32:49.185 "data_offset": 2048, 00:32:49.185 "data_size": 63488 00:32:49.185 } 00:32:49.185 ] 00:32:49.185 }' 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.185 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.763 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:49.763 13:44:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:49.763 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.763 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.763 [2024-10-28 13:44:03.762326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:49.763 [2024-10-28 13:44:03.762412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.763 [2024-10-28 13:44:03.762445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:49.763 [2024-10-28 13:44:03.762465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.763 [2024-10-28 13:44:03.763036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.764 [2024-10-28 13:44:03.763090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:49.764 [2024-10-28 13:44:03.763206] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:49.764 [2024-10-28 13:44:03.763258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:49.764 pt2 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.764 [2024-10-28 13:44:03.770261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.764 "name": "raid_bdev1", 00:32:49.764 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:49.764 "strip_size_kb": 64, 00:32:49.764 "state": "configuring", 00:32:49.764 "raid_level": "raid5f", 00:32:49.764 "superblock": true, 00:32:49.764 
"num_base_bdevs": 4, 00:32:49.764 "num_base_bdevs_discovered": 1, 00:32:49.764 "num_base_bdevs_operational": 4, 00:32:49.764 "base_bdevs_list": [ 00:32:49.764 { 00:32:49.764 "name": "pt1", 00:32:49.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:49.764 "is_configured": true, 00:32:49.764 "data_offset": 2048, 00:32:49.764 "data_size": 63488 00:32:49.764 }, 00:32:49.764 { 00:32:49.764 "name": null, 00:32:49.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.764 "is_configured": false, 00:32:49.764 "data_offset": 0, 00:32:49.764 "data_size": 63488 00:32:49.764 }, 00:32:49.764 { 00:32:49.764 "name": null, 00:32:49.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:49.764 "is_configured": false, 00:32:49.764 "data_offset": 2048, 00:32:49.764 "data_size": 63488 00:32:49.764 }, 00:32:49.764 { 00:32:49.764 "name": null, 00:32:49.764 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:49.764 "is_configured": false, 00:32:49.764 "data_offset": 2048, 00:32:49.764 "data_size": 63488 00:32:49.764 } 00:32:49.764 ] 00:32:49.764 }' 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.764 13:44:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 [2024-10-28 13:44:04.298554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:50.347 [2024-10-28 
13:44:04.298676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.347 [2024-10-28 13:44:04.298726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:50.347 [2024-10-28 13:44:04.298753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.347 [2024-10-28 13:44:04.299527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.347 [2024-10-28 13:44:04.299570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:50.347 [2024-10-28 13:44:04.299676] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:50.347 [2024-10-28 13:44:04.299722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:50.347 pt2 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 [2024-10-28 13:44:04.306464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:50.347 [2024-10-28 13:44:04.306524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.347 [2024-10-28 13:44:04.306553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:50.347 [2024-10-28 13:44:04.306567] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:32:50.347 [2024-10-28 13:44:04.307010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.347 [2024-10-28 13:44:04.307045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:50.347 [2024-10-28 13:44:04.307122] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:50.347 [2024-10-28 13:44:04.307168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:50.347 pt3 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 [2024-10-28 13:44:04.318474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:50.347 [2024-10-28 13:44:04.318535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.347 [2024-10-28 13:44:04.318565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:50.347 [2024-10-28 13:44:04.318580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.347 [2024-10-28 13:44:04.319010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.347 [2024-10-28 13:44:04.319047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:50.347 [2024-10-28 13:44:04.319127] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:50.347 [2024-10-28 13:44:04.319180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:50.347 [2024-10-28 13:44:04.319335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:50.347 [2024-10-28 13:44:04.319351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:50.347 [2024-10-28 13:44:04.319662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:50.347 [2024-10-28 13:44:04.320354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:50.347 [2024-10-28 13:44:04.320389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:50.347 [2024-10-28 13:44:04.320533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.347 pt4 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.347 "name": "raid_bdev1", 00:32:50.347 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:50.347 "strip_size_kb": 64, 00:32:50.347 "state": "online", 00:32:50.347 "raid_level": "raid5f", 00:32:50.347 "superblock": true, 00:32:50.347 "num_base_bdevs": 4, 00:32:50.347 "num_base_bdevs_discovered": 4, 00:32:50.347 "num_base_bdevs_operational": 4, 00:32:50.347 "base_bdevs_list": [ 00:32:50.347 { 00:32:50.347 "name": "pt1", 00:32:50.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 2048, 00:32:50.347 "data_size": 63488 00:32:50.347 }, 00:32:50.347 { 00:32:50.347 "name": "pt2", 00:32:50.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 2048, 00:32:50.347 "data_size": 63488 00:32:50.347 }, 00:32:50.347 { 00:32:50.347 "name": "pt3", 
00:32:50.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 2048, 00:32:50.347 "data_size": 63488 00:32:50.347 }, 00:32:50.347 { 00:32:50.347 "name": "pt4", 00:32:50.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 2048, 00:32:50.347 "data_size": 63488 00:32:50.347 } 00:32:50.347 ] 00:32:50.347 }' 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:50.347 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:50.916 [2024-10-28 13:44:04.827039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.916 13:44:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.916 "name": "raid_bdev1", 00:32:50.916 "aliases": [ 00:32:50.916 "b0a570f7-7f68-4db4-8fa9-9e65b76fca95" 00:32:50.916 ], 00:32:50.916 "product_name": "Raid Volume", 00:32:50.916 "block_size": 512, 00:32:50.916 "num_blocks": 190464, 00:32:50.916 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:50.916 "assigned_rate_limits": { 00:32:50.916 "rw_ios_per_sec": 0, 00:32:50.916 "rw_mbytes_per_sec": 0, 00:32:50.916 "r_mbytes_per_sec": 0, 00:32:50.916 "w_mbytes_per_sec": 0 00:32:50.916 }, 00:32:50.916 "claimed": false, 00:32:50.916 "zoned": false, 00:32:50.916 "supported_io_types": { 00:32:50.916 "read": true, 00:32:50.916 "write": true, 00:32:50.916 "unmap": false, 00:32:50.916 "flush": false, 00:32:50.916 "reset": true, 00:32:50.916 "nvme_admin": false, 00:32:50.916 "nvme_io": false, 00:32:50.916 "nvme_io_md": false, 00:32:50.916 "write_zeroes": true, 00:32:50.916 "zcopy": false, 00:32:50.916 "get_zone_info": false, 00:32:50.916 "zone_management": false, 00:32:50.916 "zone_append": false, 00:32:50.916 "compare": false, 00:32:50.916 "compare_and_write": false, 00:32:50.916 "abort": false, 00:32:50.916 "seek_hole": false, 00:32:50.916 "seek_data": false, 00:32:50.916 "copy": false, 00:32:50.916 "nvme_iov_md": false 00:32:50.916 }, 00:32:50.916 "driver_specific": { 00:32:50.916 "raid": { 00:32:50.916 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:50.916 "strip_size_kb": 64, 00:32:50.916 "state": "online", 00:32:50.916 "raid_level": "raid5f", 00:32:50.916 "superblock": true, 00:32:50.916 "num_base_bdevs": 4, 00:32:50.916 "num_base_bdevs_discovered": 4, 00:32:50.916 "num_base_bdevs_operational": 4, 00:32:50.916 "base_bdevs_list": [ 00:32:50.916 { 00:32:50.916 "name": "pt1", 00:32:50.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:50.916 "is_configured": true, 00:32:50.916 "data_offset": 2048, 00:32:50.916 "data_size": 63488 00:32:50.916 }, 00:32:50.916 { 00:32:50.916 
"name": "pt2", 00:32:50.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:50.916 "is_configured": true, 00:32:50.916 "data_offset": 2048, 00:32:50.916 "data_size": 63488 00:32:50.916 }, 00:32:50.916 { 00:32:50.916 "name": "pt3", 00:32:50.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:50.916 "is_configured": true, 00:32:50.916 "data_offset": 2048, 00:32:50.916 "data_size": 63488 00:32:50.916 }, 00:32:50.916 { 00:32:50.916 "name": "pt4", 00:32:50.916 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:50.916 "is_configured": true, 00:32:50.916 "data_offset": 2048, 00:32:50.916 "data_size": 63488 00:32:50.916 } 00:32:50.916 ] 00:32:50.916 } 00:32:50.916 } 00:32:50.916 }' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:50.916 pt2 00:32:50.916 pt3 00:32:50.916 pt4' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.916 13:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.916 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.174 13:44:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:51.174 [2024-10-28 13:44:05.211048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b0a570f7-7f68-4db4-8fa9-9e65b76fca95 '!=' b0a570f7-7f68-4db4-8fa9-9e65b76fca95 ']' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:32:51.174 13:44:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.174 [2024-10-28 13:44:05.262925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.174 "name": "raid_bdev1", 00:32:51.174 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:51.174 "strip_size_kb": 64, 00:32:51.174 "state": "online", 00:32:51.174 "raid_level": "raid5f", 00:32:51.174 "superblock": true, 00:32:51.174 "num_base_bdevs": 4, 00:32:51.174 "num_base_bdevs_discovered": 3, 00:32:51.174 "num_base_bdevs_operational": 3, 00:32:51.174 "base_bdevs_list": [ 00:32:51.174 { 00:32:51.174 "name": null, 00:32:51.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.174 "is_configured": false, 00:32:51.174 "data_offset": 0, 00:32:51.174 "data_size": 63488 00:32:51.174 }, 00:32:51.174 { 00:32:51.174 "name": "pt2", 00:32:51.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.174 "is_configured": true, 00:32:51.174 "data_offset": 2048, 00:32:51.174 "data_size": 63488 00:32:51.174 }, 00:32:51.174 { 00:32:51.174 "name": "pt3", 00:32:51.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:51.174 "is_configured": true, 00:32:51.174 "data_offset": 2048, 00:32:51.174 "data_size": 63488 00:32:51.174 }, 00:32:51.174 { 00:32:51.174 "name": "pt4", 00:32:51.174 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:51.174 "is_configured": true, 00:32:51.174 "data_offset": 2048, 00:32:51.174 "data_size": 63488 00:32:51.174 } 00:32:51.174 ] 00:32:51.174 }' 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.174 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 [2024-10-28 13:44:05.787010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:51.737 [2024-10-28 13:44:05.787082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:51.737 [2024-10-28 13:44:05.787227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:51.737 [2024-10-28 13:44:05.787333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:51.737 [2024-10-28 13:44:05.787349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:51.737 13:44:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.737 [2024-10-28 13:44:05.882984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:51.737 [2024-10-28 13:44:05.883058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.737 [2024-10-28 13:44:05.883086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:32:51.737 [2024-10-28 13:44:05.883100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.737 [2024-10-28 13:44:05.886196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.737 [2024-10-28 13:44:05.886259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:51.737 [2024-10-28 13:44:05.886370] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:51.737 [2024-10-28 13:44:05.886418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:51.737 pt2 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.737 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.993 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.993 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.993 "name": "raid_bdev1", 00:32:51.993 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:51.993 "strip_size_kb": 64, 00:32:51.993 "state": "configuring", 00:32:51.993 "raid_level": "raid5f", 00:32:51.993 "superblock": true, 00:32:51.993 "num_base_bdevs": 4, 00:32:51.993 "num_base_bdevs_discovered": 1, 00:32:51.993 "num_base_bdevs_operational": 3, 00:32:51.993 "base_bdevs_list": [ 00:32:51.993 { 00:32:51.993 "name": null, 00:32:51.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:51.993 "is_configured": false, 
00:32:51.993 "data_offset": 2048, 00:32:51.993 "data_size": 63488 00:32:51.993 }, 00:32:51.993 { 00:32:51.993 "name": "pt2", 00:32:51.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:51.993 "is_configured": true, 00:32:51.993 "data_offset": 2048, 00:32:51.993 "data_size": 63488 00:32:51.993 }, 00:32:51.993 { 00:32:51.993 "name": null, 00:32:51.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:51.993 "is_configured": false, 00:32:51.993 "data_offset": 2048, 00:32:51.993 "data_size": 63488 00:32:51.993 }, 00:32:51.993 { 00:32:51.993 "name": null, 00:32:51.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:51.993 "is_configured": false, 00:32:51.993 "data_offset": 2048, 00:32:51.993 "data_size": 63488 00:32:51.993 } 00:32:51.993 ] 00:32:51.993 }' 00:32:51.993 13:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.993 13:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.558 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:32:52.558 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:52.558 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:52.558 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.558 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.558 [2024-10-28 13:44:06.419281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:52.558 [2024-10-28 13:44:06.419356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.559 [2024-10-28 13:44:06.419399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:32:52.559 [2024-10-28 13:44:06.419426] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.559 [2024-10-28 13:44:06.419964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.559 [2024-10-28 13:44:06.420002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:52.559 [2024-10-28 13:44:06.420106] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:52.559 [2024-10-28 13:44:06.420161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:52.559 pt3 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.559 "name": "raid_bdev1", 00:32:52.559 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:52.559 "strip_size_kb": 64, 00:32:52.559 "state": "configuring", 00:32:52.559 "raid_level": "raid5f", 00:32:52.559 "superblock": true, 00:32:52.559 "num_base_bdevs": 4, 00:32:52.559 "num_base_bdevs_discovered": 2, 00:32:52.559 "num_base_bdevs_operational": 3, 00:32:52.559 "base_bdevs_list": [ 00:32:52.559 { 00:32:52.559 "name": null, 00:32:52.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.559 "is_configured": false, 00:32:52.559 "data_offset": 2048, 00:32:52.559 "data_size": 63488 00:32:52.559 }, 00:32:52.559 { 00:32:52.559 "name": "pt2", 00:32:52.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:52.559 "is_configured": true, 00:32:52.559 "data_offset": 2048, 00:32:52.559 "data_size": 63488 00:32:52.559 }, 00:32:52.559 { 00:32:52.559 "name": "pt3", 00:32:52.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:52.559 "is_configured": true, 00:32:52.559 "data_offset": 2048, 00:32:52.559 "data_size": 63488 00:32:52.559 }, 00:32:52.559 { 00:32:52.559 "name": null, 00:32:52.559 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:52.559 "is_configured": false, 00:32:52.559 "data_offset": 2048, 00:32:52.559 "data_size": 63488 00:32:52.559 } 00:32:52.559 ] 00:32:52.559 }' 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.559 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.817 [2024-10-28 13:44:06.955513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:52.817 [2024-10-28 13:44:06.955593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.817 [2024-10-28 13:44:06.955637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:32:52.817 [2024-10-28 13:44:06.955655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.817 [2024-10-28 13:44:06.956261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.817 [2024-10-28 13:44:06.956299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:52.817 [2024-10-28 13:44:06.956401] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:52.817 [2024-10-28 13:44:06.956433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:52.817 [2024-10-28 13:44:06.956578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:52.817 [2024-10-28 13:44:06.956604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:52.817 [2024-10-28 13:44:06.956916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006490 00:32:52.817 [2024-10-28 13:44:06.957656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:52.817 [2024-10-28 13:44:06.957691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:52.817 [2024-10-28 13:44:06.958014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.817 pt4 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.817 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.075 13:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.075 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.075 "name": "raid_bdev1", 00:32:53.075 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:53.075 "strip_size_kb": 64, 00:32:53.075 "state": "online", 00:32:53.075 "raid_level": "raid5f", 00:32:53.075 "superblock": true, 00:32:53.075 "num_base_bdevs": 4, 00:32:53.075 "num_base_bdevs_discovered": 3, 00:32:53.075 "num_base_bdevs_operational": 3, 00:32:53.075 "base_bdevs_list": [ 00:32:53.075 { 00:32:53.075 "name": null, 00:32:53.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.075 "is_configured": false, 00:32:53.075 "data_offset": 2048, 00:32:53.075 "data_size": 63488 00:32:53.075 }, 00:32:53.075 { 00:32:53.075 "name": "pt2", 00:32:53.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:53.075 "is_configured": true, 00:32:53.075 "data_offset": 2048, 00:32:53.075 "data_size": 63488 00:32:53.075 }, 00:32:53.075 { 00:32:53.075 "name": "pt3", 00:32:53.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:53.075 "is_configured": true, 00:32:53.075 "data_offset": 2048, 00:32:53.075 "data_size": 63488 00:32:53.075 }, 00:32:53.075 { 00:32:53.075 "name": "pt4", 00:32:53.075 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:53.075 "is_configured": true, 00:32:53.075 "data_offset": 2048, 00:32:53.075 "data_size": 63488 00:32:53.075 } 00:32:53.075 ] 00:32:53.075 }' 00:32:53.075 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.075 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.334 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:53.334 
13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.334 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 [2024-10-28 13:44:07.492264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:53.593 [2024-10-28 13:44:07.492434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:53.593 [2024-10-28 13:44:07.492557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.593 [2024-10-28 13:44:07.492682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.593 [2024-10-28 13:44:07.492709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete 
pt4 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 [2024-10-28 13:44:07.564297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:53.593 [2024-10-28 13:44:07.564376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:53.593 [2024-10-28 13:44:07.564405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:32:53.593 [2024-10-28 13:44:07.564423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:53.593 [2024-10-28 13:44:07.567302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:53.593 [2024-10-28 13:44:07.567354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:53.593 [2024-10-28 13:44:07.567456] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:53.593 [2024-10-28 13:44:07.567516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:53.593 [2024-10-28 13:44:07.567663] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:53.593 [2024-10-28 13:44:07.567687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:53.593 [2024-10-28 
13:44:07.567710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:32:53.593 [2024-10-28 13:44:07.567755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:53.593 [2024-10-28 13:44:07.567889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:53.593 pt1 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.593 13:44:07 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.593 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.593 "name": "raid_bdev1", 00:32:53.593 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:53.593 "strip_size_kb": 64, 00:32:53.593 "state": "configuring", 00:32:53.593 "raid_level": "raid5f", 00:32:53.593 "superblock": true, 00:32:53.593 "num_base_bdevs": 4, 00:32:53.593 "num_base_bdevs_discovered": 2, 00:32:53.593 "num_base_bdevs_operational": 3, 00:32:53.593 "base_bdevs_list": [ 00:32:53.593 { 00:32:53.593 "name": null, 00:32:53.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.593 "is_configured": false, 00:32:53.593 "data_offset": 2048, 00:32:53.594 "data_size": 63488 00:32:53.594 }, 00:32:53.594 { 00:32:53.594 "name": "pt2", 00:32:53.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:53.594 "is_configured": true, 00:32:53.594 "data_offset": 2048, 00:32:53.594 "data_size": 63488 00:32:53.594 }, 00:32:53.594 { 00:32:53.594 "name": "pt3", 00:32:53.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:53.594 "is_configured": true, 00:32:53.594 "data_offset": 2048, 00:32:53.594 "data_size": 63488 00:32:53.594 }, 00:32:53.594 { 00:32:53.594 "name": null, 00:32:53.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:53.594 "is_configured": false, 00:32:53.594 "data_offset": 2048, 00:32:53.594 "data_size": 63488 00:32:53.594 } 00:32:53.594 ] 00:32:53.594 }' 00:32:53.594 13:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.594 13:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.160 [2024-10-28 13:44:08.140521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:54.160 [2024-10-28 13:44:08.140842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:54.160 [2024-10-28 13:44:08.140910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:32:54.160 [2024-10-28 13:44:08.140943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:54.160 [2024-10-28 13:44:08.141711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:54.160 [2024-10-28 13:44:08.141769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:54.160 [2024-10-28 13:44:08.141916] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:54.160 [2024-10-28 13:44:08.141970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:54.160 [2024-10-28 
13:44:08.142219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:32:54.160 [2024-10-28 13:44:08.142255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:54.160 [2024-10-28 13:44:08.142698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:32:54.160 [2024-10-28 13:44:08.143737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:32:54.160 [2024-10-28 13:44:08.143773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:32:54.160 [2024-10-28 13:44:08.144224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:54.160 pt4 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:54.160 "name": "raid_bdev1", 00:32:54.160 "uuid": "b0a570f7-7f68-4db4-8fa9-9e65b76fca95", 00:32:54.160 "strip_size_kb": 64, 00:32:54.160 "state": "online", 00:32:54.160 "raid_level": "raid5f", 00:32:54.160 "superblock": true, 00:32:54.160 "num_base_bdevs": 4, 00:32:54.160 "num_base_bdevs_discovered": 3, 00:32:54.160 "num_base_bdevs_operational": 3, 00:32:54.160 "base_bdevs_list": [ 00:32:54.160 { 00:32:54.160 "name": null, 00:32:54.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.160 "is_configured": false, 00:32:54.160 "data_offset": 2048, 00:32:54.160 "data_size": 63488 00:32:54.160 }, 00:32:54.160 { 00:32:54.160 "name": "pt2", 00:32:54.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:54.160 "is_configured": true, 00:32:54.160 "data_offset": 2048, 00:32:54.160 "data_size": 63488 00:32:54.160 }, 00:32:54.160 { 00:32:54.160 "name": "pt3", 00:32:54.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:54.160 "is_configured": true, 00:32:54.160 "data_offset": 2048, 00:32:54.160 "data_size": 63488 00:32:54.160 }, 00:32:54.160 { 00:32:54.160 "name": "pt4", 00:32:54.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:54.160 "is_configured": true, 00:32:54.160 "data_offset": 2048, 00:32:54.160 "data_size": 63488 00:32:54.160 } 00:32:54.160 ] 00:32:54.160 }' 00:32:54.160 13:44:08 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:54.160 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.729 [2024-10-28 13:44:08.741104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b0a570f7-7f68-4db4-8fa9-9e65b76fca95 '!=' b0a570f7-7f68-4db4-8fa9-9e65b76fca95 ']' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 96841 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 96841 ']' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 
96841 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96841 00:32:54.729 killing process with pid 96841 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96841' 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 96841 00:32:54.729 [2024-10-28 13:44:08.825396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:54.729 13:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 96841 00:32:54.729 [2024-10-28 13:44:08.825514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:54.729 [2024-10-28 13:44:08.825615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:54.729 [2024-10-28 13:44:08.825636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:32:54.729 [2024-10-28 13:44:08.874262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:55.052 ************************************ 00:32:55.052 END TEST raid5f_superblock_test 00:32:55.052 ************************************ 00:32:55.052 13:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:55.052 00:32:55.052 real 0m8.254s 00:32:55.052 user 0m14.384s 00:32:55.052 sys 0m1.315s 00:32:55.052 13:44:09 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:32:55.052 13:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.052 13:44:09 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:32:55.052 13:44:09 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:32:55.052 13:44:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:55.052 13:44:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:55.052 13:44:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:55.052 ************************************ 00:32:55.052 START TEST raid5f_rebuild_test 00:32:55.052 ************************************ 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:32:55.052 13:44:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=97326 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 97326 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 97326 ']' 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:55.052 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.310 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:55.310 Zero copy mechanism will not be used. 00:32:55.310 [2024-10-28 13:44:09.295393] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:32:55.310 [2024-10-28 13:44:09.295603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97326 ] 00:32:55.310 [2024-10-28 13:44:09.452841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:32:55.568 [2024-10-28 13:44:09.483813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.568 [2024-10-28 13:44:09.525695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.568 [2024-10-28 13:44:09.587804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.568 [2024-10-28 13:44:09.587903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.568 BaseBdev1_malloc 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.568 [2024-10-28 13:44:09.680848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:55.568 [2024-10-28 13:44:09.680940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.568 [2024-10-28 13:44:09.680982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:32:55.568 [2024-10-28 13:44:09.681005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.568 [2024-10-28 13:44:09.684021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.568 [2024-10-28 13:44:09.684074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:55.568 BaseBdev1 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.568 BaseBdev2_malloc 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.568 [2024-10-28 13:44:09.709681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:55.568 [2024-10-28 13:44:09.709887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.568 [2024-10-28 13:44:09.709960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:55.568 [2024-10-28 13:44:09.710074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.568 [2024-10-28 13:44:09.713212] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.568 BaseBdev2 00:32:55.568 [2024-10-28 13:44:09.713431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.568 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 BaseBdev3_malloc 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 [2024-10-28 13:44:09.734602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:55.826 [2024-10-28 13:44:09.734819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.826 [2024-10-28 13:44:09.734895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:55.826 [2024-10-28 13:44:09.735013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.826 [2024-10-28 13:44:09.738037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.826 [2024-10-28 13:44:09.738257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:55.826 
BaseBdev3 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 BaseBdev4_malloc 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 [2024-10-28 13:44:09.775598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:55.826 [2024-10-28 13:44:09.775673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.826 [2024-10-28 13:44:09.775712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:55.826 [2024-10-28 13:44:09.775729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.826 [2024-10-28 13:44:09.778563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.826 [2024-10-28 13:44:09.778616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:55.826 BaseBdev4 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 spare_malloc 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 spare_delay 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 [2024-10-28 13:44:09.812130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:55.826 [2024-10-28 13:44:09.812258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.826 [2024-10-28 13:44:09.812289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:55.826 [2024-10-28 13:44:09.812307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.826 [2024-10-28 13:44:09.815266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.826 [2024-10-28 13:44:09.815318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:55.826 spare 00:32:55.826 13:44:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 [2024-10-28 13:44:09.820278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:55.826 [2024-10-28 13:44:09.823008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:55.826 [2024-10-28 13:44:09.823094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:55.826 [2024-10-28 13:44:09.823219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:55.826 [2024-10-28 13:44:09.823362] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:32:55.826 [2024-10-28 13:44:09.823384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:55.826 [2024-10-28 13:44:09.823723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:55.826 [2024-10-28 13:44:09.824362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:32:55.826 [2024-10-28 13:44:09.824383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:32:55.826 [2024-10-28 13:44:09.824609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:32:55.826 
13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.826 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.826 "name": "raid_bdev1", 00:32:55.826 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:32:55.826 "strip_size_kb": 64, 00:32:55.826 "state": "online", 00:32:55.826 "raid_level": "raid5f", 00:32:55.826 "superblock": false, 00:32:55.826 "num_base_bdevs": 4, 00:32:55.826 "num_base_bdevs_discovered": 4, 00:32:55.826 "num_base_bdevs_operational": 4, 00:32:55.826 "base_bdevs_list": [ 00:32:55.826 { 
00:32:55.826 "name": "BaseBdev1", 00:32:55.826 "uuid": "bc01a1c7-98bf-5ffd-93a4-555b1da3aadc", 00:32:55.826 "is_configured": true, 00:32:55.826 "data_offset": 0, 00:32:55.826 "data_size": 65536 00:32:55.826 }, 00:32:55.826 { 00:32:55.826 "name": "BaseBdev2", 00:32:55.826 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:32:55.826 "is_configured": true, 00:32:55.826 "data_offset": 0, 00:32:55.827 "data_size": 65536 00:32:55.827 }, 00:32:55.827 { 00:32:55.827 "name": "BaseBdev3", 00:32:55.827 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:32:55.827 "is_configured": true, 00:32:55.827 "data_offset": 0, 00:32:55.827 "data_size": 65536 00:32:55.827 }, 00:32:55.827 { 00:32:55.827 "name": "BaseBdev4", 00:32:55.827 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:32:55.827 "is_configured": true, 00:32:55.827 "data_offset": 0, 00:32:55.827 "data_size": 65536 00:32:55.827 } 00:32:55.827 ] 00:32:55.827 }' 00:32:55.827 13:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.827 13:44:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.393 [2024-10-28 13:44:10.313067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.393 13:44:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:56.393 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:56.651 [2024-10-28 13:44:10.693008] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:32:56.651 /dev/nbd0 00:32:56.651 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:56.651 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:56.651 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:56.651 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:56.652 1+0 records in 00:32:56.652 1+0 records out 00:32:56.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307009 s, 13.3 MB/s 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:56.652 13:44:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:32:56.652 13:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:32:57.236 512+0 records in 00:32:57.236 512+0 records out 00:32:57.236 100663296 bytes (101 MB, 96 MiB) copied, 0.638155 s, 158 MB/s 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:57.494 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:57.752 [2024-10-28 13:44:11.693006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:57.752 13:44:11 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.752 [2024-10-28 13:44:11.707156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.752 "name": "raid_bdev1", 00:32:57.752 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:32:57.752 "strip_size_kb": 64, 00:32:57.752 "state": "online", 00:32:57.752 "raid_level": "raid5f", 00:32:57.752 "superblock": false, 00:32:57.752 "num_base_bdevs": 4, 00:32:57.752 "num_base_bdevs_discovered": 3, 00:32:57.752 "num_base_bdevs_operational": 3, 00:32:57.752 "base_bdevs_list": [ 00:32:57.752 { 00:32:57.752 "name": null, 00:32:57.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.752 "is_configured": false, 00:32:57.752 "data_offset": 0, 00:32:57.752 "data_size": 65536 00:32:57.752 }, 00:32:57.752 { 00:32:57.752 "name": "BaseBdev2", 00:32:57.752 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:32:57.752 "is_configured": true, 00:32:57.752 "data_offset": 0, 00:32:57.752 "data_size": 65536 00:32:57.752 }, 00:32:57.752 { 00:32:57.752 "name": "BaseBdev3", 00:32:57.752 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:32:57.752 "is_configured": true, 00:32:57.752 "data_offset": 0, 00:32:57.752 "data_size": 65536 00:32:57.752 }, 00:32:57.752 { 00:32:57.752 "name": "BaseBdev4", 00:32:57.752 "uuid": 
"e044bd43-179c-5f18-b895-1af9dde534f4", 00:32:57.752 "is_configured": true, 00:32:57.752 "data_offset": 0, 00:32:57.752 "data_size": 65536 00:32:57.752 } 00:32:57.752 ] 00:32:57.752 }' 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.752 13:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.317 13:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:58.317 13:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.317 13:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.317 [2024-10-28 13:44:12.199340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:58.317 [2024-10-28 13:44:12.205408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:32:58.317 13:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.317 13:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:58.317 [2024-10-28 13:44:12.208508] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:59.253 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:59.253 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:59.253 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.254 13:44:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:59.254 "name": "raid_bdev1", 00:32:59.254 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:32:59.254 "strip_size_kb": 64, 00:32:59.254 "state": "online", 00:32:59.254 "raid_level": "raid5f", 00:32:59.254 "superblock": false, 00:32:59.254 "num_base_bdevs": 4, 00:32:59.254 "num_base_bdevs_discovered": 4, 00:32:59.254 "num_base_bdevs_operational": 4, 00:32:59.254 "process": { 00:32:59.254 "type": "rebuild", 00:32:59.254 "target": "spare", 00:32:59.254 "progress": { 00:32:59.254 "blocks": 19200, 00:32:59.254 "percent": 9 00:32:59.254 } 00:32:59.254 }, 00:32:59.254 "base_bdevs_list": [ 00:32:59.254 { 00:32:59.254 "name": "spare", 00:32:59.254 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:32:59.254 "is_configured": true, 00:32:59.254 "data_offset": 0, 00:32:59.254 "data_size": 65536 00:32:59.254 }, 00:32:59.254 { 00:32:59.254 "name": "BaseBdev2", 00:32:59.254 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:32:59.254 "is_configured": true, 00:32:59.254 "data_offset": 0, 00:32:59.254 "data_size": 65536 00:32:59.254 }, 00:32:59.254 { 00:32:59.254 "name": "BaseBdev3", 00:32:59.254 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:32:59.254 "is_configured": true, 00:32:59.254 "data_offset": 0, 00:32:59.254 "data_size": 65536 00:32:59.254 }, 00:32:59.254 { 00:32:59.254 "name": "BaseBdev4", 00:32:59.254 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:32:59.254 "is_configured": true, 00:32:59.254 "data_offset": 0, 00:32:59.254 "data_size": 65536 00:32:59.254 } 
00:32:59.254 ] 00:32:59.254 }' 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.254 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.254 [2024-10-28 13:44:13.373993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.512 [2024-10-28 13:44:13.419646] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:59.512 [2024-10-28 13:44:13.419935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.512 [2024-10-28 13:44:13.419968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:59.512 [2024-10-28 13:44:13.420001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.512 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.512 "name": "raid_bdev1", 00:32:59.512 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:32:59.513 "strip_size_kb": 64, 00:32:59.513 "state": "online", 00:32:59.513 "raid_level": "raid5f", 00:32:59.513 "superblock": false, 00:32:59.513 "num_base_bdevs": 4, 00:32:59.513 "num_base_bdevs_discovered": 3, 00:32:59.513 "num_base_bdevs_operational": 3, 00:32:59.513 "base_bdevs_list": [ 00:32:59.513 { 00:32:59.513 "name": null, 00:32:59.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.513 "is_configured": false, 00:32:59.513 "data_offset": 0, 00:32:59.513 "data_size": 65536 00:32:59.513 }, 00:32:59.513 { 00:32:59.513 "name": "BaseBdev2", 00:32:59.513 "uuid": 
"1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:32:59.513 "is_configured": true, 00:32:59.513 "data_offset": 0, 00:32:59.513 "data_size": 65536 00:32:59.513 }, 00:32:59.513 { 00:32:59.513 "name": "BaseBdev3", 00:32:59.513 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:32:59.513 "is_configured": true, 00:32:59.513 "data_offset": 0, 00:32:59.513 "data_size": 65536 00:32:59.513 }, 00:32:59.513 { 00:32:59.513 "name": "BaseBdev4", 00:32:59.513 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:32:59.513 "is_configured": true, 00:32:59.513 "data_offset": 0, 00:32:59.513 "data_size": 65536 00:32:59.513 } 00:32:59.513 ] 00:32:59.513 }' 00:32:59.513 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.513 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.079 13:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:33:00.079 "name": "raid_bdev1", 00:33:00.079 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:00.079 "strip_size_kb": 64, 00:33:00.079 "state": "online", 00:33:00.079 "raid_level": "raid5f", 00:33:00.079 "superblock": false, 00:33:00.079 "num_base_bdevs": 4, 00:33:00.079 "num_base_bdevs_discovered": 3, 00:33:00.079 "num_base_bdevs_operational": 3, 00:33:00.079 "base_bdevs_list": [ 00:33:00.079 { 00:33:00.079 "name": null, 00:33:00.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.079 "is_configured": false, 00:33:00.079 "data_offset": 0, 00:33:00.079 "data_size": 65536 00:33:00.079 }, 00:33:00.079 { 00:33:00.079 "name": "BaseBdev2", 00:33:00.079 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:00.079 "is_configured": true, 00:33:00.079 "data_offset": 0, 00:33:00.079 "data_size": 65536 00:33:00.079 }, 00:33:00.079 { 00:33:00.079 "name": "BaseBdev3", 00:33:00.079 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:00.079 "is_configured": true, 00:33:00.079 "data_offset": 0, 00:33:00.079 "data_size": 65536 00:33:00.079 }, 00:33:00.079 { 00:33:00.079 "name": "BaseBdev4", 00:33:00.079 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:00.079 "is_configured": true, 00:33:00.079 "data_offset": 0, 00:33:00.079 "data_size": 65536 00:33:00.079 } 00:33:00.079 ] 00:33:00.079 }' 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.079 [2024-10-28 13:44:14.099812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:00.079 [2024-10-28 13:44:14.106193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.079 13:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:00.079 [2024-10-28 13:44:14.109479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:01.015 "name": "raid_bdev1", 00:33:01.015 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 
00:33:01.015 "strip_size_kb": 64, 00:33:01.015 "state": "online", 00:33:01.015 "raid_level": "raid5f", 00:33:01.015 "superblock": false, 00:33:01.015 "num_base_bdevs": 4, 00:33:01.015 "num_base_bdevs_discovered": 4, 00:33:01.015 "num_base_bdevs_operational": 4, 00:33:01.015 "process": { 00:33:01.015 "type": "rebuild", 00:33:01.015 "target": "spare", 00:33:01.015 "progress": { 00:33:01.015 "blocks": 17280, 00:33:01.015 "percent": 8 00:33:01.015 } 00:33:01.015 }, 00:33:01.015 "base_bdevs_list": [ 00:33:01.015 { 00:33:01.015 "name": "spare", 00:33:01.015 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:01.015 "is_configured": true, 00:33:01.015 "data_offset": 0, 00:33:01.015 "data_size": 65536 00:33:01.015 }, 00:33:01.015 { 00:33:01.015 "name": "BaseBdev2", 00:33:01.015 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:01.015 "is_configured": true, 00:33:01.015 "data_offset": 0, 00:33:01.015 "data_size": 65536 00:33:01.015 }, 00:33:01.015 { 00:33:01.015 "name": "BaseBdev3", 00:33:01.015 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:01.015 "is_configured": true, 00:33:01.015 "data_offset": 0, 00:33:01.015 "data_size": 65536 00:33:01.015 }, 00:33:01.015 { 00:33:01.015 "name": "BaseBdev4", 00:33:01.015 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:01.015 "is_configured": true, 00:33:01.015 "data_offset": 0, 00:33:01.015 "data_size": 65536 00:33:01.015 } 00:33:01.015 ] 00:33:01.015 }' 00:33:01.015 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:33:01.273 
13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=589 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:01.273 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:01.274 "name": "raid_bdev1", 00:33:01.274 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:01.274 "strip_size_kb": 64, 00:33:01.274 "state": "online", 00:33:01.274 "raid_level": "raid5f", 00:33:01.274 "superblock": false, 00:33:01.274 "num_base_bdevs": 4, 00:33:01.274 "num_base_bdevs_discovered": 4, 00:33:01.274 "num_base_bdevs_operational": 4, 00:33:01.274 "process": 
{ 00:33:01.274 "type": "rebuild", 00:33:01.274 "target": "spare", 00:33:01.274 "progress": { 00:33:01.274 "blocks": 21120, 00:33:01.274 "percent": 10 00:33:01.274 } 00:33:01.274 }, 00:33:01.274 "base_bdevs_list": [ 00:33:01.274 { 00:33:01.274 "name": "spare", 00:33:01.274 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:01.274 "is_configured": true, 00:33:01.274 "data_offset": 0, 00:33:01.274 "data_size": 65536 00:33:01.274 }, 00:33:01.274 { 00:33:01.274 "name": "BaseBdev2", 00:33:01.274 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:01.274 "is_configured": true, 00:33:01.274 "data_offset": 0, 00:33:01.274 "data_size": 65536 00:33:01.274 }, 00:33:01.274 { 00:33:01.274 "name": "BaseBdev3", 00:33:01.274 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:01.274 "is_configured": true, 00:33:01.274 "data_offset": 0, 00:33:01.274 "data_size": 65536 00:33:01.274 }, 00:33:01.274 { 00:33:01.274 "name": "BaseBdev4", 00:33:01.274 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:01.274 "is_configured": true, 00:33:01.274 "data_offset": 0, 00:33:01.274 "data_size": 65536 00:33:01.274 } 00:33:01.274 ] 00:33:01.274 }' 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:01.274 13:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:02.672 "name": "raid_bdev1", 00:33:02.672 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:02.672 "strip_size_kb": 64, 00:33:02.672 "state": "online", 00:33:02.672 "raid_level": "raid5f", 00:33:02.672 "superblock": false, 00:33:02.672 "num_base_bdevs": 4, 00:33:02.672 "num_base_bdevs_discovered": 4, 00:33:02.672 "num_base_bdevs_operational": 4, 00:33:02.672 "process": { 00:33:02.672 "type": "rebuild", 00:33:02.672 "target": "spare", 00:33:02.672 "progress": { 00:33:02.672 "blocks": 44160, 00:33:02.672 "percent": 22 00:33:02.672 } 00:33:02.672 }, 00:33:02.672 "base_bdevs_list": [ 00:33:02.672 { 00:33:02.672 "name": "spare", 00:33:02.672 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:02.672 "is_configured": true, 00:33:02.672 "data_offset": 0, 00:33:02.672 "data_size": 65536 00:33:02.672 }, 00:33:02.672 { 00:33:02.672 "name": "BaseBdev2", 00:33:02.672 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:02.672 "is_configured": true, 00:33:02.672 "data_offset": 0, 
00:33:02.672 "data_size": 65536 00:33:02.672 }, 00:33:02.672 { 00:33:02.672 "name": "BaseBdev3", 00:33:02.672 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:02.672 "is_configured": true, 00:33:02.672 "data_offset": 0, 00:33:02.672 "data_size": 65536 00:33:02.672 }, 00:33:02.672 { 00:33:02.672 "name": "BaseBdev4", 00:33:02.672 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:02.672 "is_configured": true, 00:33:02.672 "data_offset": 0, 00:33:02.672 "data_size": 65536 00:33:02.672 } 00:33:02.672 ] 00:33:02.672 }' 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:02.672 13:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.607 13:44:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:03.607 "name": "raid_bdev1", 00:33:03.607 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:03.607 "strip_size_kb": 64, 00:33:03.607 "state": "online", 00:33:03.607 "raid_level": "raid5f", 00:33:03.607 "superblock": false, 00:33:03.607 "num_base_bdevs": 4, 00:33:03.607 "num_base_bdevs_discovered": 4, 00:33:03.607 "num_base_bdevs_operational": 4, 00:33:03.607 "process": { 00:33:03.607 "type": "rebuild", 00:33:03.607 "target": "spare", 00:33:03.607 "progress": { 00:33:03.607 "blocks": 65280, 00:33:03.607 "percent": 33 00:33:03.607 } 00:33:03.607 }, 00:33:03.607 "base_bdevs_list": [ 00:33:03.607 { 00:33:03.607 "name": "spare", 00:33:03.607 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:03.607 "is_configured": true, 00:33:03.607 "data_offset": 0, 00:33:03.607 "data_size": 65536 00:33:03.607 }, 00:33:03.607 { 00:33:03.607 "name": "BaseBdev2", 00:33:03.607 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:03.607 "is_configured": true, 00:33:03.607 "data_offset": 0, 00:33:03.607 "data_size": 65536 00:33:03.607 }, 00:33:03.607 { 00:33:03.607 "name": "BaseBdev3", 00:33:03.607 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:03.607 "is_configured": true, 00:33:03.607 "data_offset": 0, 00:33:03.607 "data_size": 65536 00:33:03.607 }, 00:33:03.607 { 00:33:03.607 "name": "BaseBdev4", 00:33:03.607 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:03.607 "is_configured": true, 00:33:03.607 "data_offset": 0, 00:33:03.607 "data_size": 65536 00:33:03.607 } 00:33:03.607 ] 00:33:03.607 }' 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:03.607 13:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:04.983 "name": "raid_bdev1", 00:33:04.983 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:04.983 "strip_size_kb": 64, 00:33:04.983 "state": "online", 00:33:04.983 "raid_level": "raid5f", 00:33:04.983 "superblock": false, 
00:33:04.983 "num_base_bdevs": 4, 00:33:04.983 "num_base_bdevs_discovered": 4, 00:33:04.983 "num_base_bdevs_operational": 4, 00:33:04.983 "process": { 00:33:04.983 "type": "rebuild", 00:33:04.983 "target": "spare", 00:33:04.983 "progress": { 00:33:04.983 "blocks": 88320, 00:33:04.983 "percent": 44 00:33:04.983 } 00:33:04.983 }, 00:33:04.983 "base_bdevs_list": [ 00:33:04.983 { 00:33:04.983 "name": "spare", 00:33:04.983 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:04.983 "is_configured": true, 00:33:04.983 "data_offset": 0, 00:33:04.983 "data_size": 65536 00:33:04.983 }, 00:33:04.983 { 00:33:04.983 "name": "BaseBdev2", 00:33:04.983 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:04.983 "is_configured": true, 00:33:04.983 "data_offset": 0, 00:33:04.983 "data_size": 65536 00:33:04.983 }, 00:33:04.983 { 00:33:04.983 "name": "BaseBdev3", 00:33:04.983 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:04.983 "is_configured": true, 00:33:04.983 "data_offset": 0, 00:33:04.983 "data_size": 65536 00:33:04.983 }, 00:33:04.983 { 00:33:04.983 "name": "BaseBdev4", 00:33:04.983 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:04.983 "is_configured": true, 00:33:04.983 "data_offset": 0, 00:33:04.983 "data_size": 65536 00:33:04.983 } 00:33:04.983 ] 00:33:04.983 }' 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:04.983 13:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:05.918 "name": "raid_bdev1", 00:33:05.918 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:05.918 "strip_size_kb": 64, 00:33:05.918 "state": "online", 00:33:05.918 "raid_level": "raid5f", 00:33:05.918 "superblock": false, 00:33:05.918 "num_base_bdevs": 4, 00:33:05.918 "num_base_bdevs_discovered": 4, 00:33:05.918 "num_base_bdevs_operational": 4, 00:33:05.918 "process": { 00:33:05.918 "type": "rebuild", 00:33:05.918 "target": "spare", 00:33:05.918 "progress": { 00:33:05.918 "blocks": 109440, 00:33:05.918 "percent": 55 00:33:05.918 } 00:33:05.918 }, 00:33:05.918 "base_bdevs_list": [ 00:33:05.918 { 00:33:05.918 "name": "spare", 00:33:05.918 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:05.918 "is_configured": true, 00:33:05.918 "data_offset": 0, 00:33:05.918 "data_size": 65536 00:33:05.918 }, 00:33:05.918 { 00:33:05.918 
"name": "BaseBdev2", 00:33:05.918 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:05.918 "is_configured": true, 00:33:05.918 "data_offset": 0, 00:33:05.918 "data_size": 65536 00:33:05.918 }, 00:33:05.918 { 00:33:05.918 "name": "BaseBdev3", 00:33:05.918 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:05.918 "is_configured": true, 00:33:05.918 "data_offset": 0, 00:33:05.918 "data_size": 65536 00:33:05.918 }, 00:33:05.918 { 00:33:05.918 "name": "BaseBdev4", 00:33:05.918 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:05.918 "is_configured": true, 00:33:05.918 "data_offset": 0, 00:33:05.918 "data_size": 65536 00:33:05.918 } 00:33:05.918 ] 00:33:05.918 }' 00:33:05.918 13:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:05.918 13:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:05.918 13:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:06.176 13:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:06.176 13:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:07.112 "name": "raid_bdev1", 00:33:07.112 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:07.112 "strip_size_kb": 64, 00:33:07.112 "state": "online", 00:33:07.112 "raid_level": "raid5f", 00:33:07.112 "superblock": false, 00:33:07.112 "num_base_bdevs": 4, 00:33:07.112 "num_base_bdevs_discovered": 4, 00:33:07.112 "num_base_bdevs_operational": 4, 00:33:07.112 "process": { 00:33:07.112 "type": "rebuild", 00:33:07.112 "target": "spare", 00:33:07.112 "progress": { 00:33:07.112 "blocks": 132480, 00:33:07.112 "percent": 67 00:33:07.112 } 00:33:07.112 }, 00:33:07.112 "base_bdevs_list": [ 00:33:07.112 { 00:33:07.112 "name": "spare", 00:33:07.112 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:07.112 "is_configured": true, 00:33:07.112 "data_offset": 0, 00:33:07.112 "data_size": 65536 00:33:07.112 }, 00:33:07.112 { 00:33:07.112 "name": "BaseBdev2", 00:33:07.112 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:07.112 "is_configured": true, 00:33:07.112 "data_offset": 0, 00:33:07.112 "data_size": 65536 00:33:07.112 }, 00:33:07.112 { 00:33:07.112 "name": "BaseBdev3", 00:33:07.112 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:07.112 "is_configured": true, 00:33:07.112 "data_offset": 0, 00:33:07.112 "data_size": 65536 00:33:07.112 }, 00:33:07.112 { 00:33:07.112 "name": "BaseBdev4", 00:33:07.112 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:07.112 "is_configured": true, 00:33:07.112 "data_offset": 0, 00:33:07.112 
"data_size": 65536 00:33:07.112 } 00:33:07.112 ] 00:33:07.112 }' 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:07.112 13:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:08.508 "name": "raid_bdev1", 00:33:08.508 "uuid": 
"77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:08.508 "strip_size_kb": 64, 00:33:08.508 "state": "online", 00:33:08.508 "raid_level": "raid5f", 00:33:08.508 "superblock": false, 00:33:08.508 "num_base_bdevs": 4, 00:33:08.508 "num_base_bdevs_discovered": 4, 00:33:08.508 "num_base_bdevs_operational": 4, 00:33:08.508 "process": { 00:33:08.508 "type": "rebuild", 00:33:08.508 "target": "spare", 00:33:08.508 "progress": { 00:33:08.508 "blocks": 153600, 00:33:08.508 "percent": 78 00:33:08.508 } 00:33:08.508 }, 00:33:08.508 "base_bdevs_list": [ 00:33:08.508 { 00:33:08.508 "name": "spare", 00:33:08.508 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:08.508 "is_configured": true, 00:33:08.508 "data_offset": 0, 00:33:08.508 "data_size": 65536 00:33:08.508 }, 00:33:08.508 { 00:33:08.508 "name": "BaseBdev2", 00:33:08.508 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:08.508 "is_configured": true, 00:33:08.508 "data_offset": 0, 00:33:08.508 "data_size": 65536 00:33:08.508 }, 00:33:08.508 { 00:33:08.508 "name": "BaseBdev3", 00:33:08.508 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:08.508 "is_configured": true, 00:33:08.508 "data_offset": 0, 00:33:08.508 "data_size": 65536 00:33:08.508 }, 00:33:08.508 { 00:33:08.508 "name": "BaseBdev4", 00:33:08.508 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:08.508 "is_configured": true, 00:33:08.508 "data_offset": 0, 00:33:08.508 "data_size": 65536 00:33:08.508 } 00:33:08.508 ] 00:33:08.508 }' 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:08.508 13:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:09.443 "name": "raid_bdev1", 00:33:09.443 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:09.443 "strip_size_kb": 64, 00:33:09.443 "state": "online", 00:33:09.443 "raid_level": "raid5f", 00:33:09.443 "superblock": false, 00:33:09.443 "num_base_bdevs": 4, 00:33:09.443 "num_base_bdevs_discovered": 4, 00:33:09.443 "num_base_bdevs_operational": 4, 00:33:09.443 "process": { 00:33:09.443 "type": "rebuild", 00:33:09.443 "target": "spare", 00:33:09.443 "progress": { 00:33:09.443 "blocks": 176640, 00:33:09.443 "percent": 89 00:33:09.443 } 00:33:09.443 }, 00:33:09.443 "base_bdevs_list": [ 00:33:09.443 { 00:33:09.443 "name": "spare", 00:33:09.443 "uuid": 
"18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:09.443 "is_configured": true, 00:33:09.443 "data_offset": 0, 00:33:09.443 "data_size": 65536 00:33:09.443 }, 00:33:09.443 { 00:33:09.443 "name": "BaseBdev2", 00:33:09.443 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:09.443 "is_configured": true, 00:33:09.443 "data_offset": 0, 00:33:09.443 "data_size": 65536 00:33:09.443 }, 00:33:09.443 { 00:33:09.443 "name": "BaseBdev3", 00:33:09.443 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:09.443 "is_configured": true, 00:33:09.443 "data_offset": 0, 00:33:09.443 "data_size": 65536 00:33:09.443 }, 00:33:09.443 { 00:33:09.443 "name": "BaseBdev4", 00:33:09.443 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:09.443 "is_configured": true, 00:33:09.443 "data_offset": 0, 00:33:09.443 "data_size": 65536 00:33:09.443 } 00:33:09.443 ] 00:33:09.443 }' 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:09.443 13:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:10.422 [2024-10-28 13:44:24.505107] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:10.422 [2024-10-28 13:44:24.505209] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:10.422 [2024-10-28 13:44:24.505306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.422 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:10.681 "name": "raid_bdev1", 00:33:10.681 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:10.681 "strip_size_kb": 64, 00:33:10.681 "state": "online", 00:33:10.681 "raid_level": "raid5f", 00:33:10.681 "superblock": false, 00:33:10.681 "num_base_bdevs": 4, 00:33:10.681 "num_base_bdevs_discovered": 4, 00:33:10.681 "num_base_bdevs_operational": 4, 00:33:10.681 "base_bdevs_list": [ 00:33:10.681 { 00:33:10.681 "name": "spare", 00:33:10.681 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:10.681 "is_configured": true, 00:33:10.681 "data_offset": 0, 00:33:10.681 "data_size": 65536 00:33:10.681 }, 00:33:10.681 { 00:33:10.681 "name": "BaseBdev2", 00:33:10.681 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:10.681 "is_configured": true, 00:33:10.681 "data_offset": 0, 00:33:10.681 "data_size": 65536 00:33:10.681 }, 00:33:10.681 { 00:33:10.681 "name": "BaseBdev3", 00:33:10.681 
"uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:10.681 "is_configured": true, 00:33:10.681 "data_offset": 0, 00:33:10.681 "data_size": 65536 00:33:10.681 }, 00:33:10.681 { 00:33:10.681 "name": "BaseBdev4", 00:33:10.681 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:10.681 "is_configured": true, 00:33:10.681 "data_offset": 0, 00:33:10.681 "data_size": 65536 00:33:10.681 } 00:33:10.681 ] 00:33:10.681 }' 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:10.681 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:10.682 "name": "raid_bdev1", 00:33:10.682 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:10.682 "strip_size_kb": 64, 00:33:10.682 "state": "online", 00:33:10.682 "raid_level": "raid5f", 00:33:10.682 "superblock": false, 00:33:10.682 "num_base_bdevs": 4, 00:33:10.682 "num_base_bdevs_discovered": 4, 00:33:10.682 "num_base_bdevs_operational": 4, 00:33:10.682 "base_bdevs_list": [ 00:33:10.682 { 00:33:10.682 "name": "spare", 00:33:10.682 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:10.682 "is_configured": true, 00:33:10.682 "data_offset": 0, 00:33:10.682 "data_size": 65536 00:33:10.682 }, 00:33:10.682 { 00:33:10.682 "name": "BaseBdev2", 00:33:10.682 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:10.682 "is_configured": true, 00:33:10.682 "data_offset": 0, 00:33:10.682 "data_size": 65536 00:33:10.682 }, 00:33:10.682 { 00:33:10.682 "name": "BaseBdev3", 00:33:10.682 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:10.682 "is_configured": true, 00:33:10.682 "data_offset": 0, 00:33:10.682 "data_size": 65536 00:33:10.682 }, 00:33:10.682 { 00:33:10.682 "name": "BaseBdev4", 00:33:10.682 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:10.682 "is_configured": true, 00:33:10.682 "data_offset": 0, 00:33:10.682 "data_size": 65536 00:33:10.682 } 00:33:10.682 ] 00:33:10.682 }' 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:10.682 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.941 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.941 "name": "raid_bdev1", 00:33:10.941 "uuid": "77785d93-4fcf-4345-a963-4a1a29ab4926", 00:33:10.941 "strip_size_kb": 64, 00:33:10.941 "state": "online", 00:33:10.941 "raid_level": "raid5f", 00:33:10.941 "superblock": false, 00:33:10.941 "num_base_bdevs": 4, 00:33:10.941 "num_base_bdevs_discovered": 4, 00:33:10.941 
"num_base_bdevs_operational": 4, 00:33:10.941 "base_bdevs_list": [ 00:33:10.941 { 00:33:10.941 "name": "spare", 00:33:10.941 "uuid": "18a13a3e-c92f-5348-835d-1d9afbbecb16", 00:33:10.941 "is_configured": true, 00:33:10.941 "data_offset": 0, 00:33:10.941 "data_size": 65536 00:33:10.941 }, 00:33:10.941 { 00:33:10.941 "name": "BaseBdev2", 00:33:10.941 "uuid": "1b917dd3-bcf6-578f-80f3-e813e46308e2", 00:33:10.941 "is_configured": true, 00:33:10.941 "data_offset": 0, 00:33:10.941 "data_size": 65536 00:33:10.941 }, 00:33:10.941 { 00:33:10.941 "name": "BaseBdev3", 00:33:10.941 "uuid": "710376bd-05be-59f8-9898-be101bd7c083", 00:33:10.941 "is_configured": true, 00:33:10.941 "data_offset": 0, 00:33:10.941 "data_size": 65536 00:33:10.941 }, 00:33:10.941 { 00:33:10.941 "name": "BaseBdev4", 00:33:10.941 "uuid": "e044bd43-179c-5f18-b895-1af9dde534f4", 00:33:10.941 "is_configured": true, 00:33:10.941 "data_offset": 0, 00:33:10.942 "data_size": 65536 00:33:10.942 } 00:33:10.942 ] 00:33:10.942 }' 00:33:10.942 13:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.942 13:44:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.508 [2024-10-28 13:44:25.423901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:11.508 [2024-10-28 13:44:25.423943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:11.508 [2024-10-28 13:44:25.424052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:11.508 [2024-10-28 13:44:25.424196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:33:11.508 [2024-10-28 13:44:25.424217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:11.508 
13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:11.508 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:11.767 /dev/nbd0 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:11.767 1+0 records in 00:33:11.767 1+0 records out 00:33:11.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529742 s, 7.7 MB/s 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:11.767 13:44:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:12.026 /dev/nbd1 00:33:12.026 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:12.026 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:12.027 1+0 records in 00:33:12.027 1+0 records out 00:33:12.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395107 s, 10.4 MB/s 
00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:33:12.027 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:12.286 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:12.544 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 97326 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 97326 ']' 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- 
# kill -0 97326 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97326 00:33:12.803 killing process with pid 97326 00:33:12.803 Received shutdown signal, test time was about 60.000000 seconds 00:33:12.803 00:33:12.803 Latency(us) 00:33:12.803 [2024-10-28T13:44:26.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.803 [2024-10-28T13:44:26.963Z] =================================================================================================================== 00:33:12.803 [2024-10-28T13:44:26.963Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97326' 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 97326 00:33:12.803 [2024-10-28 13:44:26.912214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:12.803 13:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 97326 00:33:13.062 [2024-10-28 13:44:26.963817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:13.062 ************************************ 00:33:13.062 END TEST raid5f_rebuild_test 00:33:13.062 ************************************ 00:33:13.062 13:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:33:13.062 00:33:13.062 real 0m18.020s 00:33:13.062 user 0m22.795s 00:33:13.062 sys 0m2.230s 00:33:13.062 13:44:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:13.062 13:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:13.321 13:44:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:33:13.321 13:44:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:13.321 13:44:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:13.321 13:44:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:13.321 ************************************ 00:33:13.321 START TEST raid5f_rebuild_test_sb 00:33:13.321 ************************************ 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:13.321 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=97818 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 97818 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 97818 ']' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:13.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:13.322 13:44:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 [2024-10-28 13:44:27.348689] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:33:13.322 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:13.322 Zero copy mechanism will not be used. 
00:33:13.322 [2024-10-28 13:44:27.348867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97818 ] 00:33:13.581 [2024-10-28 13:44:27.493014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:13.581 [2024-10-28 13:44:27.518130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.581 [2024-10-28 13:44:27.562788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.581 [2024-10-28 13:44:27.620807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.581 [2024-10-28 13:44:27.620861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.148 BaseBdev1_malloc 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.148 [2024-10-28 13:44:28.295763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:14.148 [2024-10-28 13:44:28.295872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.148 [2024-10-28 13:44:28.295913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:14.148 [2024-10-28 13:44:28.295944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.148 [2024-10-28 13:44:28.298888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.148 [2024-10-28 13:44:28.298955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:14.148 BaseBdev1 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.148 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 BaseBdev2_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 [2024-10-28 13:44:28.319834] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:14.408 [2024-10-28 13:44:28.319906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.408 [2024-10-28 13:44:28.319934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:14.408 [2024-10-28 13:44:28.319952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.408 [2024-10-28 13:44:28.322775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.408 [2024-10-28 13:44:28.322858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:14.408 BaseBdev2 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 BaseBdev3_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 [2024-10-28 13:44:28.347807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:14.408 [2024-10-28 13:44:28.347892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:33:14.408 [2024-10-28 13:44:28.347922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:14.408 [2024-10-28 13:44:28.347940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.408 [2024-10-28 13:44:28.350788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.408 [2024-10-28 13:44:28.350857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:14.408 BaseBdev3 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 BaseBdev4_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 [2024-10-28 13:44:28.390039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:14.408 [2024-10-28 13:44:28.390120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.408 [2024-10-28 13:44:28.390178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:14.408 [2024-10-28 
13:44:28.390199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.408 [2024-10-28 13:44:28.392987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.408 [2024-10-28 13:44:28.393052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:14.408 BaseBdev4 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 spare_malloc 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 spare_delay 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 [2024-10-28 13:44:28.429946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:14.408 [2024-10-28 13:44:28.430033] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:14.408 [2024-10-28 13:44:28.430061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:14.408 [2024-10-28 13:44:28.430078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:14.408 [2024-10-28 13:44:28.432996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:14.408 [2024-10-28 13:44:28.433061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:14.408 spare 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.408 [2024-10-28 13:44:28.438046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:14.408 [2024-10-28 13:44:28.440618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:14.408 [2024-10-28 13:44:28.440705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:14.408 [2024-10-28 13:44:28.440805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:14.408 [2024-10-28 13:44:28.441052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:33:14.408 [2024-10-28 13:44:28.441077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:14.408 [2024-10-28 13:44:28.441426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:14.408 [2024-10-28 13:44:28.442014] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:33:14.408 [2024-10-28 13:44:28.442058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:33:14.408 [2024-10-28 13:44:28.442311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.408 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.409 "name": "raid_bdev1", 00:33:14.409 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:14.409 "strip_size_kb": 64, 00:33:14.409 "state": "online", 00:33:14.409 "raid_level": "raid5f", 00:33:14.409 "superblock": true, 00:33:14.409 "num_base_bdevs": 4, 00:33:14.409 "num_base_bdevs_discovered": 4, 00:33:14.409 "num_base_bdevs_operational": 4, 00:33:14.409 "base_bdevs_list": [ 00:33:14.409 { 00:33:14.409 "name": "BaseBdev1", 00:33:14.409 "uuid": "9f4a0954-93b2-5a40-9f74-8ff8ce537218", 00:33:14.409 "is_configured": true, 00:33:14.409 "data_offset": 2048, 00:33:14.409 "data_size": 63488 00:33:14.409 }, 00:33:14.409 { 00:33:14.409 "name": "BaseBdev2", 00:33:14.409 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:14.409 "is_configured": true, 00:33:14.409 "data_offset": 2048, 00:33:14.409 "data_size": 63488 00:33:14.409 }, 00:33:14.409 { 00:33:14.409 "name": "BaseBdev3", 00:33:14.409 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:14.409 "is_configured": true, 00:33:14.409 "data_offset": 2048, 00:33:14.409 "data_size": 63488 00:33:14.409 }, 00:33:14.409 { 00:33:14.409 "name": "BaseBdev4", 00:33:14.409 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:14.409 "is_configured": true, 00:33:14.409 "data_offset": 2048, 00:33:14.409 "data_size": 63488 00:33:14.409 } 00:33:14.409 ] 00:33:14.409 }' 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.409 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.002 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:15.003 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:15.003 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:15.003 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.003 [2024-10-28 13:44:28.966677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.003 13:44:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:15.003 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:15.260 [2024-10-28 13:44:29.358618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:33:15.260 /dev/nbd0 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- 
# dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:15.260 1+0 records in 00:33:15.260 1+0 records out 00:33:15.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003319 s, 12.3 MB/s 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:15.260 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:15.261 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:33:15.261 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:33:15.261 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:33:15.261 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:33:16.195 496+0 records in 00:33:16.195 496+0 records out 00:33:16.195 97517568 bytes (98 MB, 93 MiB) copied, 0.568816 s, 171 MB/s 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:16.195 
13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:16.195 13:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:16.195 [2024-10-28 13:44:30.243389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.195 [2024-10-28 13:44:30.259503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.195 "name": "raid_bdev1", 00:33:16.195 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:16.195 "strip_size_kb": 64, 00:33:16.195 "state": "online", 00:33:16.195 "raid_level": "raid5f", 00:33:16.195 "superblock": true, 00:33:16.195 "num_base_bdevs": 4, 
00:33:16.195 "num_base_bdevs_discovered": 3, 00:33:16.195 "num_base_bdevs_operational": 3, 00:33:16.195 "base_bdevs_list": [ 00:33:16.195 { 00:33:16.195 "name": null, 00:33:16.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.195 "is_configured": false, 00:33:16.195 "data_offset": 0, 00:33:16.195 "data_size": 63488 00:33:16.195 }, 00:33:16.195 { 00:33:16.195 "name": "BaseBdev2", 00:33:16.195 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:16.195 "is_configured": true, 00:33:16.195 "data_offset": 2048, 00:33:16.195 "data_size": 63488 00:33:16.195 }, 00:33:16.195 { 00:33:16.195 "name": "BaseBdev3", 00:33:16.195 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:16.195 "is_configured": true, 00:33:16.195 "data_offset": 2048, 00:33:16.195 "data_size": 63488 00:33:16.195 }, 00:33:16.195 { 00:33:16.195 "name": "BaseBdev4", 00:33:16.195 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:16.195 "is_configured": true, 00:33:16.195 "data_offset": 2048, 00:33:16.195 "data_size": 63488 00:33:16.195 } 00:33:16.195 ] 00:33:16.195 }' 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.195 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.766 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:16.766 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.766 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.766 [2024-10-28 13:44:30.755675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:16.766 [2024-10-28 13:44:30.761841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:33:16.766 13:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.766 13:44:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:16.766 [2024-10-28 13:44:30.764916] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.702 "name": "raid_bdev1", 00:33:17.702 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:17.702 "strip_size_kb": 64, 00:33:17.702 "state": "online", 00:33:17.702 "raid_level": "raid5f", 00:33:17.702 "superblock": true, 00:33:17.702 "num_base_bdevs": 4, 00:33:17.702 "num_base_bdevs_discovered": 4, 00:33:17.702 "num_base_bdevs_operational": 4, 00:33:17.702 "process": { 00:33:17.702 "type": "rebuild", 00:33:17.702 "target": "spare", 00:33:17.702 "progress": { 00:33:17.702 "blocks": 19200, 00:33:17.702 "percent": 10 00:33:17.702 } 
00:33:17.702 }, 00:33:17.702 "base_bdevs_list": [ 00:33:17.702 { 00:33:17.702 "name": "spare", 00:33:17.702 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:17.702 "is_configured": true, 00:33:17.702 "data_offset": 2048, 00:33:17.702 "data_size": 63488 00:33:17.702 }, 00:33:17.702 { 00:33:17.702 "name": "BaseBdev2", 00:33:17.702 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:17.702 "is_configured": true, 00:33:17.702 "data_offset": 2048, 00:33:17.702 "data_size": 63488 00:33:17.702 }, 00:33:17.702 { 00:33:17.702 "name": "BaseBdev3", 00:33:17.702 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:17.702 "is_configured": true, 00:33:17.702 "data_offset": 2048, 00:33:17.702 "data_size": 63488 00:33:17.702 }, 00:33:17.702 { 00:33:17.702 "name": "BaseBdev4", 00:33:17.702 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:17.702 "is_configured": true, 00:33:17.702 "data_offset": 2048, 00:33:17.702 "data_size": 63488 00:33:17.702 } 00:33:17.702 ] 00:33:17.702 }' 00:33:17.702 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.962 [2024-10-28 13:44:31.930080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:17.962 [2024-10-28 13:44:31.975921] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:33:17.962 [2024-10-28 13:44:31.976039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:17.962 [2024-10-28 13:44:31.976065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:17.962 [2024-10-28 13:44:31.976096] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.962 13:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.962 13:44:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.962 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.962 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:17.962 "name": "raid_bdev1", 00:33:17.962 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:17.962 "strip_size_kb": 64, 00:33:17.962 "state": "online", 00:33:17.962 "raid_level": "raid5f", 00:33:17.962 "superblock": true, 00:33:17.962 "num_base_bdevs": 4, 00:33:17.962 "num_base_bdevs_discovered": 3, 00:33:17.962 "num_base_bdevs_operational": 3, 00:33:17.962 "base_bdevs_list": [ 00:33:17.962 { 00:33:17.962 "name": null, 00:33:17.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.962 "is_configured": false, 00:33:17.962 "data_offset": 0, 00:33:17.962 "data_size": 63488 00:33:17.962 }, 00:33:17.962 { 00:33:17.962 "name": "BaseBdev2", 00:33:17.962 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 2048, 00:33:17.963 "data_size": 63488 00:33:17.963 }, 00:33:17.963 { 00:33:17.963 "name": "BaseBdev3", 00:33:17.963 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 2048, 00:33:17.963 "data_size": 63488 00:33:17.963 }, 00:33:17.963 { 00:33:17.963 "name": "BaseBdev4", 00:33:17.963 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 2048, 00:33:17.963 "data_size": 63488 00:33:17.963 } 00:33:17.963 ] 00:33:17.963 }' 00:33:17.963 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:17.963 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:18.530 13:44:32 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:18.530 "name": "raid_bdev1", 00:33:18.530 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:18.530 "strip_size_kb": 64, 00:33:18.530 "state": "online", 00:33:18.530 "raid_level": "raid5f", 00:33:18.530 "superblock": true, 00:33:18.530 "num_base_bdevs": 4, 00:33:18.530 "num_base_bdevs_discovered": 3, 00:33:18.530 "num_base_bdevs_operational": 3, 00:33:18.530 "base_bdevs_list": [ 00:33:18.530 { 00:33:18.530 "name": null, 00:33:18.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.530 "is_configured": false, 00:33:18.530 "data_offset": 0, 00:33:18.530 "data_size": 63488 00:33:18.530 }, 00:33:18.530 { 00:33:18.530 "name": "BaseBdev2", 00:33:18.530 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:18.530 "is_configured": true, 00:33:18.530 "data_offset": 2048, 00:33:18.530 "data_size": 63488 00:33:18.530 }, 00:33:18.530 { 00:33:18.530 "name": "BaseBdev3", 00:33:18.530 "uuid": 
"032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:18.530 "is_configured": true, 00:33:18.530 "data_offset": 2048, 00:33:18.530 "data_size": 63488 00:33:18.530 }, 00:33:18.530 { 00:33:18.530 "name": "BaseBdev4", 00:33:18.530 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:18.530 "is_configured": true, 00:33:18.530 "data_offset": 2048, 00:33:18.530 "data_size": 63488 00:33:18.530 } 00:33:18.530 ] 00:33:18.530 }' 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.530 [2024-10-28 13:44:32.663691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.530 [2024-10-28 13:44:32.669563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.530 13:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:18.530 [2024-10-28 13:44:32.672560] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.913 "name": "raid_bdev1", 00:33:19.913 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:19.913 "strip_size_kb": 64, 00:33:19.913 "state": "online", 00:33:19.913 "raid_level": "raid5f", 00:33:19.913 "superblock": true, 00:33:19.913 "num_base_bdevs": 4, 00:33:19.913 "num_base_bdevs_discovered": 4, 00:33:19.913 "num_base_bdevs_operational": 4, 00:33:19.913 "process": { 00:33:19.913 "type": "rebuild", 00:33:19.913 "target": "spare", 00:33:19.913 "progress": { 00:33:19.913 "blocks": 19200, 00:33:19.913 "percent": 10 00:33:19.913 } 00:33:19.913 }, 00:33:19.913 "base_bdevs_list": [ 00:33:19.913 { 00:33:19.913 "name": "spare", 00:33:19.913 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev2", 00:33:19.913 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:19.913 
"is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev3", 00:33:19.913 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev4", 00:33:19.913 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 } 00:33:19.913 ] 00:33:19.913 }' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:19.913 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=607 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.913 "name": "raid_bdev1", 00:33:19.913 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:19.913 "strip_size_kb": 64, 00:33:19.913 "state": "online", 00:33:19.913 "raid_level": "raid5f", 00:33:19.913 "superblock": true, 00:33:19.913 "num_base_bdevs": 4, 00:33:19.913 "num_base_bdevs_discovered": 4, 00:33:19.913 "num_base_bdevs_operational": 4, 00:33:19.913 "process": { 00:33:19.913 "type": "rebuild", 00:33:19.913 "target": "spare", 00:33:19.913 "progress": { 00:33:19.913 "blocks": 21120, 00:33:19.913 "percent": 11 00:33:19.913 } 00:33:19.913 }, 00:33:19.913 "base_bdevs_list": [ 00:33:19.913 { 00:33:19.913 "name": "spare", 00:33:19.913 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev2", 00:33:19.913 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:19.913 
"is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev3", 00:33:19.913 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 }, 00:33:19.913 { 00:33:19.913 "name": "BaseBdev4", 00:33:19.913 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:19.913 "is_configured": true, 00:33:19.913 "data_offset": 2048, 00:33:19.913 "data_size": 63488 00:33:19.913 } 00:33:19.913 ] 00:33:19.913 }' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.913 13:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.849 13:44:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.849 13:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.107 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:21.108 "name": "raid_bdev1", 00:33:21.108 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:21.108 "strip_size_kb": 64, 00:33:21.108 "state": "online", 00:33:21.108 "raid_level": "raid5f", 00:33:21.108 "superblock": true, 00:33:21.108 "num_base_bdevs": 4, 00:33:21.108 "num_base_bdevs_discovered": 4, 00:33:21.108 "num_base_bdevs_operational": 4, 00:33:21.108 "process": { 00:33:21.108 "type": "rebuild", 00:33:21.108 "target": "spare", 00:33:21.108 "progress": { 00:33:21.108 "blocks": 44160, 00:33:21.108 "percent": 23 00:33:21.108 } 00:33:21.108 }, 00:33:21.108 "base_bdevs_list": [ 00:33:21.108 { 00:33:21.108 "name": "spare", 00:33:21.108 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:21.108 "is_configured": true, 00:33:21.108 "data_offset": 2048, 00:33:21.108 "data_size": 63488 00:33:21.108 }, 00:33:21.108 { 00:33:21.108 "name": "BaseBdev2", 00:33:21.108 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:21.108 "is_configured": true, 00:33:21.108 "data_offset": 2048, 00:33:21.108 "data_size": 63488 00:33:21.108 }, 00:33:21.108 { 00:33:21.108 "name": "BaseBdev3", 00:33:21.108 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:21.108 "is_configured": true, 00:33:21.108 "data_offset": 2048, 00:33:21.108 "data_size": 63488 00:33:21.108 }, 00:33:21.108 { 00:33:21.108 "name": "BaseBdev4", 00:33:21.108 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:21.108 "is_configured": true, 00:33:21.108 "data_offset": 2048, 00:33:21.108 
"data_size": 63488 00:33:21.108 } 00:33:21.108 ] 00:33:21.108 }' 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.108 13:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:22.045 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.304 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:22.304 "name": 
"raid_bdev1", 00:33:22.304 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:22.304 "strip_size_kb": 64, 00:33:22.304 "state": "online", 00:33:22.304 "raid_level": "raid5f", 00:33:22.304 "superblock": true, 00:33:22.304 "num_base_bdevs": 4, 00:33:22.304 "num_base_bdevs_discovered": 4, 00:33:22.304 "num_base_bdevs_operational": 4, 00:33:22.304 "process": { 00:33:22.304 "type": "rebuild", 00:33:22.304 "target": "spare", 00:33:22.304 "progress": { 00:33:22.304 "blocks": 65280, 00:33:22.304 "percent": 34 00:33:22.304 } 00:33:22.304 }, 00:33:22.304 "base_bdevs_list": [ 00:33:22.304 { 00:33:22.304 "name": "spare", 00:33:22.304 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:22.304 "is_configured": true, 00:33:22.304 "data_offset": 2048, 00:33:22.304 "data_size": 63488 00:33:22.304 }, 00:33:22.304 { 00:33:22.304 "name": "BaseBdev2", 00:33:22.304 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:22.304 "is_configured": true, 00:33:22.304 "data_offset": 2048, 00:33:22.304 "data_size": 63488 00:33:22.304 }, 00:33:22.304 { 00:33:22.304 "name": "BaseBdev3", 00:33:22.304 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:22.304 "is_configured": true, 00:33:22.304 "data_offset": 2048, 00:33:22.304 "data_size": 63488 00:33:22.304 }, 00:33:22.304 { 00:33:22.304 "name": "BaseBdev4", 00:33:22.304 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:22.304 "is_configured": true, 00:33:22.304 "data_offset": 2048, 00:33:22.304 "data_size": 63488 00:33:22.304 } 00:33:22.304 ] 00:33:22.304 }' 00:33:22.304 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:22.304 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.304 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:22.304 13:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.304 13:44:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.241 "name": "raid_bdev1", 00:33:23.241 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:23.241 "strip_size_kb": 64, 00:33:23.241 "state": "online", 00:33:23.241 "raid_level": "raid5f", 00:33:23.241 "superblock": true, 00:33:23.241 "num_base_bdevs": 4, 00:33:23.241 "num_base_bdevs_discovered": 4, 00:33:23.241 "num_base_bdevs_operational": 4, 00:33:23.241 "process": { 00:33:23.241 "type": "rebuild", 00:33:23.241 "target": "spare", 00:33:23.241 "progress": { 00:33:23.241 "blocks": 88320, 00:33:23.241 "percent": 46 00:33:23.241 } 00:33:23.241 }, 00:33:23.241 
"base_bdevs_list": [ 00:33:23.241 { 00:33:23.241 "name": "spare", 00:33:23.241 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:23.241 "is_configured": true, 00:33:23.241 "data_offset": 2048, 00:33:23.241 "data_size": 63488 00:33:23.241 }, 00:33:23.241 { 00:33:23.241 "name": "BaseBdev2", 00:33:23.241 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:23.241 "is_configured": true, 00:33:23.241 "data_offset": 2048, 00:33:23.241 "data_size": 63488 00:33:23.241 }, 00:33:23.241 { 00:33:23.241 "name": "BaseBdev3", 00:33:23.241 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:23.241 "is_configured": true, 00:33:23.241 "data_offset": 2048, 00:33:23.241 "data_size": 63488 00:33:23.241 }, 00:33:23.241 { 00:33:23.241 "name": "BaseBdev4", 00:33:23.241 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:23.241 "is_configured": true, 00:33:23.241 "data_offset": 2048, 00:33:23.241 "data_size": 63488 00:33:23.241 } 00:33:23.241 ] 00:33:23.241 }' 00:33:23.241 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.500 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.500 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.500 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.500 13:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:24.436 "name": "raid_bdev1", 00:33:24.436 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:24.436 "strip_size_kb": 64, 00:33:24.436 "state": "online", 00:33:24.436 "raid_level": "raid5f", 00:33:24.436 "superblock": true, 00:33:24.436 "num_base_bdevs": 4, 00:33:24.436 "num_base_bdevs_discovered": 4, 00:33:24.436 "num_base_bdevs_operational": 4, 00:33:24.436 "process": { 00:33:24.436 "type": "rebuild", 00:33:24.436 "target": "spare", 00:33:24.436 "progress": { 00:33:24.436 "blocks": 109440, 00:33:24.436 "percent": 57 00:33:24.436 } 00:33:24.436 }, 00:33:24.436 "base_bdevs_list": [ 00:33:24.436 { 00:33:24.436 "name": "spare", 00:33:24.436 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:24.436 "is_configured": true, 00:33:24.436 "data_offset": 2048, 00:33:24.436 "data_size": 63488 00:33:24.436 }, 00:33:24.436 { 00:33:24.436 "name": "BaseBdev2", 00:33:24.436 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:24.436 "is_configured": true, 00:33:24.436 "data_offset": 2048, 00:33:24.436 "data_size": 63488 00:33:24.436 }, 00:33:24.436 { 00:33:24.436 "name": "BaseBdev3", 00:33:24.436 "uuid": 
"032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:24.436 "is_configured": true, 00:33:24.436 "data_offset": 2048, 00:33:24.436 "data_size": 63488 00:33:24.436 }, 00:33:24.436 { 00:33:24.436 "name": "BaseBdev4", 00:33:24.436 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:24.436 "is_configured": true, 00:33:24.436 "data_offset": 2048, 00:33:24.436 "data_size": 63488 00:33:24.436 } 00:33:24.436 ] 00:33:24.436 }' 00:33:24.436 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:24.695 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:24.695 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:24.695 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:24.695 13:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:25.630 "name": "raid_bdev1", 00:33:25.630 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:25.630 "strip_size_kb": 64, 00:33:25.630 "state": "online", 00:33:25.630 "raid_level": "raid5f", 00:33:25.630 "superblock": true, 00:33:25.630 "num_base_bdevs": 4, 00:33:25.630 "num_base_bdevs_discovered": 4, 00:33:25.630 "num_base_bdevs_operational": 4, 00:33:25.630 "process": { 00:33:25.630 "type": "rebuild", 00:33:25.630 "target": "spare", 00:33:25.630 "progress": { 00:33:25.630 "blocks": 132480, 00:33:25.630 "percent": 69 00:33:25.630 } 00:33:25.630 }, 00:33:25.630 "base_bdevs_list": [ 00:33:25.630 { 00:33:25.630 "name": "spare", 00:33:25.630 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:25.630 "is_configured": true, 00:33:25.630 "data_offset": 2048, 00:33:25.630 "data_size": 63488 00:33:25.630 }, 00:33:25.630 { 00:33:25.630 "name": "BaseBdev2", 00:33:25.630 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:25.630 "is_configured": true, 00:33:25.630 "data_offset": 2048, 00:33:25.630 "data_size": 63488 00:33:25.630 }, 00:33:25.630 { 00:33:25.630 "name": "BaseBdev3", 00:33:25.630 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:25.630 "is_configured": true, 00:33:25.630 "data_offset": 2048, 00:33:25.630 "data_size": 63488 00:33:25.630 }, 00:33:25.630 { 00:33:25.630 "name": "BaseBdev4", 00:33:25.630 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:25.630 "is_configured": true, 00:33:25.630 "data_offset": 2048, 00:33:25.630 "data_size": 63488 00:33:25.630 } 00:33:25.630 ] 00:33:25.630 }' 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:25.630 13:44:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:25.630 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:25.910 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:25.910 13:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:26.851 "name": "raid_bdev1", 00:33:26.851 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:26.851 "strip_size_kb": 64, 00:33:26.851 "state": "online", 00:33:26.851 "raid_level": "raid5f", 00:33:26.851 "superblock": true, 
00:33:26.851 "num_base_bdevs": 4, 00:33:26.851 "num_base_bdevs_discovered": 4, 00:33:26.851 "num_base_bdevs_operational": 4, 00:33:26.851 "process": { 00:33:26.851 "type": "rebuild", 00:33:26.851 "target": "spare", 00:33:26.851 "progress": { 00:33:26.851 "blocks": 153600, 00:33:26.851 "percent": 80 00:33:26.851 } 00:33:26.851 }, 00:33:26.851 "base_bdevs_list": [ 00:33:26.851 { 00:33:26.851 "name": "spare", 00:33:26.851 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:26.851 "is_configured": true, 00:33:26.851 "data_offset": 2048, 00:33:26.851 "data_size": 63488 00:33:26.851 }, 00:33:26.851 { 00:33:26.851 "name": "BaseBdev2", 00:33:26.851 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:26.851 "is_configured": true, 00:33:26.851 "data_offset": 2048, 00:33:26.851 "data_size": 63488 00:33:26.851 }, 00:33:26.851 { 00:33:26.851 "name": "BaseBdev3", 00:33:26.851 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:26.851 "is_configured": true, 00:33:26.851 "data_offset": 2048, 00:33:26.851 "data_size": 63488 00:33:26.851 }, 00:33:26.851 { 00:33:26.851 "name": "BaseBdev4", 00:33:26.851 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:26.851 "is_configured": true, 00:33:26.851 "data_offset": 2048, 00:33:26.851 "data_size": 63488 00:33:26.851 } 00:33:26.851 ] 00:33:26.851 }' 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:26.851 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:26.852 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:26.852 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:26.852 13:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:28.229 13:44:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.229 13:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:28.229 "name": "raid_bdev1", 00:33:28.229 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:28.229 "strip_size_kb": 64, 00:33:28.229 "state": "online", 00:33:28.229 "raid_level": "raid5f", 00:33:28.229 "superblock": true, 00:33:28.229 "num_base_bdevs": 4, 00:33:28.229 "num_base_bdevs_discovered": 4, 00:33:28.229 "num_base_bdevs_operational": 4, 00:33:28.229 "process": { 00:33:28.229 "type": "rebuild", 00:33:28.229 "target": "spare", 00:33:28.229 "progress": { 00:33:28.229 "blocks": 176640, 00:33:28.229 "percent": 92 00:33:28.229 } 00:33:28.229 }, 00:33:28.229 "base_bdevs_list": [ 00:33:28.229 { 00:33:28.229 "name": "spare", 00:33:28.229 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:28.229 "is_configured": true, 00:33:28.229 "data_offset": 2048, 00:33:28.229 
"data_size": 63488 00:33:28.229 }, 00:33:28.229 { 00:33:28.229 "name": "BaseBdev2", 00:33:28.229 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:28.229 "is_configured": true, 00:33:28.229 "data_offset": 2048, 00:33:28.229 "data_size": 63488 00:33:28.229 }, 00:33:28.229 { 00:33:28.229 "name": "BaseBdev3", 00:33:28.229 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:28.229 "is_configured": true, 00:33:28.229 "data_offset": 2048, 00:33:28.229 "data_size": 63488 00:33:28.229 }, 00:33:28.229 { 00:33:28.229 "name": "BaseBdev4", 00:33:28.229 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:28.229 "is_configured": true, 00:33:28.229 "data_offset": 2048, 00:33:28.229 "data_size": 63488 00:33:28.229 } 00:33:28.229 ] 00:33:28.229 }' 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:28.229 13:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:28.796 [2024-10-28 13:44:42.758675] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:28.796 [2024-10-28 13:44:42.758762] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:28.796 [2024-10-28 13:44:42.758953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:29.054 "name": "raid_bdev1", 00:33:29.054 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:29.054 "strip_size_kb": 64, 00:33:29.054 "state": "online", 00:33:29.054 "raid_level": "raid5f", 00:33:29.054 "superblock": true, 00:33:29.054 "num_base_bdevs": 4, 00:33:29.054 "num_base_bdevs_discovered": 4, 00:33:29.054 "num_base_bdevs_operational": 4, 00:33:29.054 "base_bdevs_list": [ 00:33:29.054 { 00:33:29.054 "name": "spare", 00:33:29.054 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:29.054 "is_configured": true, 00:33:29.054 "data_offset": 2048, 00:33:29.054 "data_size": 63488 00:33:29.054 }, 00:33:29.054 { 00:33:29.054 "name": "BaseBdev2", 00:33:29.054 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:29.054 "is_configured": true, 00:33:29.054 "data_offset": 2048, 00:33:29.054 "data_size": 63488 00:33:29.054 }, 00:33:29.054 { 00:33:29.054 "name": "BaseBdev3", 00:33:29.054 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 
00:33:29.054 "is_configured": true, 00:33:29.054 "data_offset": 2048, 00:33:29.054 "data_size": 63488 00:33:29.054 }, 00:33:29.054 { 00:33:29.054 "name": "BaseBdev4", 00:33:29.054 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:29.054 "is_configured": true, 00:33:29.054 "data_offset": 2048, 00:33:29.054 "data_size": 63488 00:33:29.054 } 00:33:29.054 ] 00:33:29.054 }' 00:33:29.054 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.313 13:44:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:29.313 "name": "raid_bdev1", 00:33:29.313 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:29.313 "strip_size_kb": 64, 00:33:29.313 "state": "online", 00:33:29.313 "raid_level": "raid5f", 00:33:29.313 "superblock": true, 00:33:29.313 "num_base_bdevs": 4, 00:33:29.313 "num_base_bdevs_discovered": 4, 00:33:29.313 "num_base_bdevs_operational": 4, 00:33:29.313 "base_bdevs_list": [ 00:33:29.313 { 00:33:29.313 "name": "spare", 00:33:29.313 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:29.313 "is_configured": true, 00:33:29.313 "data_offset": 2048, 00:33:29.313 "data_size": 63488 00:33:29.313 }, 00:33:29.313 { 00:33:29.313 "name": "BaseBdev2", 00:33:29.313 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:29.313 "is_configured": true, 00:33:29.313 "data_offset": 2048, 00:33:29.313 "data_size": 63488 00:33:29.313 }, 00:33:29.313 { 00:33:29.313 "name": "BaseBdev3", 00:33:29.313 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:29.313 "is_configured": true, 00:33:29.313 "data_offset": 2048, 00:33:29.313 "data_size": 63488 00:33:29.313 }, 00:33:29.313 { 00:33:29.313 "name": "BaseBdev4", 00:33:29.313 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:29.313 "is_configured": true, 00:33:29.313 "data_offset": 2048, 00:33:29.313 "data_size": 63488 00:33:29.313 } 00:33:29.313 ] 00:33:29.313 }' 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:29.313 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:29.571 13:44:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.571 "name": "raid_bdev1", 00:33:29.571 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:29.571 "strip_size_kb": 64, 00:33:29.571 "state": "online", 00:33:29.571 "raid_level": "raid5f", 00:33:29.571 "superblock": true, 
00:33:29.571 "num_base_bdevs": 4, 00:33:29.571 "num_base_bdevs_discovered": 4, 00:33:29.571 "num_base_bdevs_operational": 4, 00:33:29.571 "base_bdevs_list": [ 00:33:29.571 { 00:33:29.571 "name": "spare", 00:33:29.571 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:29.571 "is_configured": true, 00:33:29.571 "data_offset": 2048, 00:33:29.571 "data_size": 63488 00:33:29.571 }, 00:33:29.571 { 00:33:29.571 "name": "BaseBdev2", 00:33:29.571 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:29.571 "is_configured": true, 00:33:29.571 "data_offset": 2048, 00:33:29.571 "data_size": 63488 00:33:29.571 }, 00:33:29.571 { 00:33:29.571 "name": "BaseBdev3", 00:33:29.571 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:29.571 "is_configured": true, 00:33:29.571 "data_offset": 2048, 00:33:29.571 "data_size": 63488 00:33:29.571 }, 00:33:29.571 { 00:33:29.571 "name": "BaseBdev4", 00:33:29.571 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:29.571 "is_configured": true, 00:33:29.571 "data_offset": 2048, 00:33:29.571 "data_size": 63488 00:33:29.571 } 00:33:29.571 ] 00:33:29.571 }' 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.571 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.139 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:30.139 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.139 13:44:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.139 [2024-10-28 13:44:43.998085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:30.139 [2024-10-28 13:44:43.998306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:30.139 [2024-10-28 13:44:43.998557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:30.139 
[2024-10-28 13:44:43.998702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:30.139 [2024-10-28 13:44:43.998722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:30.139 13:44:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:30.139 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:30.398 /dev/nbd0 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:30.398 1+0 records in 00:33:30.398 1+0 records out 00:33:30.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597646 s, 6.9 MB/s 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:30.398 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:30.656 /dev/nbd1 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:30.656 13:44:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:30.656 1+0 records in 00:33:30.656 1+0 records out 00:33:30.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597857 s, 6.9 MB/s 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:30.656 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:30.915 13:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:31.174 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.432 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.432 [2024-10-28 13:44:45.516132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:31.432 [2024-10-28 13:44:45.516224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.432 [2024-10-28 13:44:45.516259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:31.432 [2024-10-28 13:44:45.516276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.432 [2024-10-28 13:44:45.519389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.432 [2024-10-28 13:44:45.519450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:31.432 [2024-10-28 13:44:45.519559] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:31.432 [2024-10-28 13:44:45.519612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:31.432 [2024-10-28 13:44:45.519796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:33:31.433 [2024-10-28 13:44:45.519975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:31.433 [2024-10-28 13:44:45.520119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:31.433 spare 00:33:31.433 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.433 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:31.433 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.433 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.690 [2024-10-28 13:44:45.620468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:31.690 [2024-10-28 13:44:45.620510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:31.690 [2024-10-28 13:44:45.620873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:33:31.690 [2024-10-28 13:44:45.621569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:31.690 [2024-10-28 13:44:45.621605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:31.690 [2024-10-28 13:44:45.621813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.690 13:44:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.690 "name": "raid_bdev1", 00:33:31.690 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:31.690 "strip_size_kb": 64, 00:33:31.690 "state": "online", 00:33:31.690 "raid_level": "raid5f", 00:33:31.690 "superblock": true, 00:33:31.690 "num_base_bdevs": 4, 00:33:31.690 "num_base_bdevs_discovered": 4, 00:33:31.690 "num_base_bdevs_operational": 4, 00:33:31.690 "base_bdevs_list": [ 00:33:31.690 { 00:33:31.690 "name": "spare", 00:33:31.690 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:31.690 "is_configured": true, 00:33:31.690 "data_offset": 2048, 00:33:31.690 "data_size": 63488 
00:33:31.690 }, 00:33:31.690 { 00:33:31.690 "name": "BaseBdev2", 00:33:31.690 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:31.690 "is_configured": true, 00:33:31.690 "data_offset": 2048, 00:33:31.690 "data_size": 63488 00:33:31.690 }, 00:33:31.690 { 00:33:31.690 "name": "BaseBdev3", 00:33:31.690 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:31.690 "is_configured": true, 00:33:31.690 "data_offset": 2048, 00:33:31.690 "data_size": 63488 00:33:31.690 }, 00:33:31.690 { 00:33:31.690 "name": "BaseBdev4", 00:33:31.690 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:31.690 "is_configured": true, 00:33:31.690 "data_offset": 2048, 00:33:31.690 "data_size": 63488 00:33:31.690 } 00:33:31.690 ] 00:33:31.690 }' 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.690 13:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.256 13:44:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:32.256 "name": "raid_bdev1", 00:33:32.256 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:32.256 "strip_size_kb": 64, 00:33:32.256 "state": "online", 00:33:32.256 "raid_level": "raid5f", 00:33:32.256 "superblock": true, 00:33:32.256 "num_base_bdevs": 4, 00:33:32.256 "num_base_bdevs_discovered": 4, 00:33:32.256 "num_base_bdevs_operational": 4, 00:33:32.256 "base_bdevs_list": [ 00:33:32.256 { 00:33:32.256 "name": "spare", 00:33:32.256 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:32.256 "is_configured": true, 00:33:32.256 "data_offset": 2048, 00:33:32.256 "data_size": 63488 00:33:32.256 }, 00:33:32.256 { 00:33:32.256 "name": "BaseBdev2", 00:33:32.256 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:32.256 "is_configured": true, 00:33:32.256 "data_offset": 2048, 00:33:32.256 "data_size": 63488 00:33:32.256 }, 00:33:32.256 { 00:33:32.256 "name": "BaseBdev3", 00:33:32.256 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:32.256 "is_configured": true, 00:33:32.256 "data_offset": 2048, 00:33:32.256 "data_size": 63488 00:33:32.256 }, 00:33:32.256 { 00:33:32.256 "name": "BaseBdev4", 00:33:32.256 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:32.256 "is_configured": true, 00:33:32.256 "data_offset": 2048, 00:33:32.256 "data_size": 63488 00:33:32.256 } 00:33:32.256 ] 00:33:32.256 }' 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:32.256 13:44:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.256 [2024-10-28 13:44:46.365078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:32.256 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.257 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.515 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:32.515 "name": "raid_bdev1", 00:33:32.515 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:32.515 "strip_size_kb": 64, 00:33:32.515 "state": "online", 00:33:32.515 "raid_level": "raid5f", 00:33:32.515 "superblock": true, 00:33:32.515 "num_base_bdevs": 4, 00:33:32.515 "num_base_bdevs_discovered": 3, 00:33:32.515 "num_base_bdevs_operational": 3, 00:33:32.515 "base_bdevs_list": [ 00:33:32.515 { 00:33:32.515 "name": null, 00:33:32.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.515 "is_configured": false, 00:33:32.515 "data_offset": 0, 00:33:32.515 "data_size": 63488 00:33:32.515 }, 00:33:32.515 { 00:33:32.515 "name": "BaseBdev2", 00:33:32.515 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:32.515 "is_configured": true, 00:33:32.515 "data_offset": 2048, 00:33:32.515 "data_size": 63488 00:33:32.515 }, 00:33:32.515 { 00:33:32.515 "name": "BaseBdev3", 00:33:32.515 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:32.515 "is_configured": true, 00:33:32.515 "data_offset": 2048, 
00:33:32.515 "data_size": 63488 00:33:32.515 }, 00:33:32.515 { 00:33:32.515 "name": "BaseBdev4", 00:33:32.515 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:32.515 "is_configured": true, 00:33:32.515 "data_offset": 2048, 00:33:32.515 "data_size": 63488 00:33:32.515 } 00:33:32.515 ] 00:33:32.515 }' 00:33:32.515 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:32.515 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.773 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:32.773 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.773 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:32.773 [2024-10-28 13:44:46.897287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.773 [2024-10-28 13:44:46.897594] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:32.773 [2024-10-28 13:44:46.897628] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:32.773 [2024-10-28 13:44:46.897701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.773 [2024-10-28 13:44:46.903610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:33:32.773 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.773 13:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:32.773 [2024-10-28 13:44:46.906769] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:34.147 "name": "raid_bdev1", 00:33:34.147 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:34.147 "strip_size_kb": 64, 00:33:34.147 "state": "online", 00:33:34.147 
"raid_level": "raid5f", 00:33:34.147 "superblock": true, 00:33:34.147 "num_base_bdevs": 4, 00:33:34.147 "num_base_bdevs_discovered": 4, 00:33:34.147 "num_base_bdevs_operational": 4, 00:33:34.147 "process": { 00:33:34.147 "type": "rebuild", 00:33:34.147 "target": "spare", 00:33:34.147 "progress": { 00:33:34.147 "blocks": 19200, 00:33:34.147 "percent": 10 00:33:34.147 } 00:33:34.147 }, 00:33:34.147 "base_bdevs_list": [ 00:33:34.147 { 00:33:34.147 "name": "spare", 00:33:34.147 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:34.147 "is_configured": true, 00:33:34.147 "data_offset": 2048, 00:33:34.147 "data_size": 63488 00:33:34.147 }, 00:33:34.147 { 00:33:34.147 "name": "BaseBdev2", 00:33:34.147 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:34.147 "is_configured": true, 00:33:34.147 "data_offset": 2048, 00:33:34.147 "data_size": 63488 00:33:34.147 }, 00:33:34.147 { 00:33:34.147 "name": "BaseBdev3", 00:33:34.147 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:34.147 "is_configured": true, 00:33:34.147 "data_offset": 2048, 00:33:34.147 "data_size": 63488 00:33:34.147 }, 00:33:34.147 { 00:33:34.147 "name": "BaseBdev4", 00:33:34.147 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:34.147 "is_configured": true, 00:33:34.147 "data_offset": 2048, 00:33:34.147 "data_size": 63488 00:33:34.147 } 00:33:34.147 ] 00:33:34.147 }' 00:33:34.147 13:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.147 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.147 [2024-10-28 13:44:48.081082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.147 [2024-10-28 13:44:48.117164] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:34.148 [2024-10-28 13:44:48.117273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.148 [2024-10-28 13:44:48.117301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.148 [2024-10-28 13:44:48.117318] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:34.148 "name": "raid_bdev1", 00:33:34.148 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:34.148 "strip_size_kb": 64, 00:33:34.148 "state": "online", 00:33:34.148 "raid_level": "raid5f", 00:33:34.148 "superblock": true, 00:33:34.148 "num_base_bdevs": 4, 00:33:34.148 "num_base_bdevs_discovered": 3, 00:33:34.148 "num_base_bdevs_operational": 3, 00:33:34.148 "base_bdevs_list": [ 00:33:34.148 { 00:33:34.148 "name": null, 00:33:34.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.148 "is_configured": false, 00:33:34.148 "data_offset": 0, 00:33:34.148 "data_size": 63488 00:33:34.148 }, 00:33:34.148 { 00:33:34.148 "name": "BaseBdev2", 00:33:34.148 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:34.148 "is_configured": true, 00:33:34.148 "data_offset": 2048, 00:33:34.148 "data_size": 63488 00:33:34.148 }, 00:33:34.148 { 00:33:34.148 "name": "BaseBdev3", 00:33:34.148 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:34.148 "is_configured": true, 00:33:34.148 "data_offset": 2048, 00:33:34.148 "data_size": 63488 00:33:34.148 }, 00:33:34.148 { 00:33:34.148 "name": "BaseBdev4", 00:33:34.148 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:34.148 "is_configured": true, 00:33:34.148 "data_offset": 2048, 00:33:34.148 "data_size": 63488 00:33:34.148 } 00:33:34.148 ] 00:33:34.148 
}' 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:34.148 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.715 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:34.715 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.715 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:34.715 [2024-10-28 13:44:48.652321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:34.715 [2024-10-28 13:44:48.652564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:34.715 [2024-10-28 13:44:48.652612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:34.715 [2024-10-28 13:44:48.652633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:34.715 [2024-10-28 13:44:48.653270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:34.715 [2024-10-28 13:44:48.653303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:34.715 [2024-10-28 13:44:48.653417] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:34.715 [2024-10-28 13:44:48.653445] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:34.715 [2024-10-28 13:44:48.653459] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:34.715 [2024-10-28 13:44:48.653515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:34.715 [2024-10-28 13:44:48.659383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:33:34.715 spare 00:33:34.715 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.715 13:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:34.715 [2024-10-28 13:44:48.662658] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.652 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:35.652 "name": "raid_bdev1", 00:33:35.652 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:35.652 "strip_size_kb": 64, 00:33:35.652 "state": 
"online", 00:33:35.652 "raid_level": "raid5f", 00:33:35.652 "superblock": true, 00:33:35.652 "num_base_bdevs": 4, 00:33:35.652 "num_base_bdevs_discovered": 4, 00:33:35.652 "num_base_bdevs_operational": 4, 00:33:35.652 "process": { 00:33:35.652 "type": "rebuild", 00:33:35.652 "target": "spare", 00:33:35.652 "progress": { 00:33:35.652 "blocks": 19200, 00:33:35.652 "percent": 10 00:33:35.652 } 00:33:35.652 }, 00:33:35.652 "base_bdevs_list": [ 00:33:35.652 { 00:33:35.652 "name": "spare", 00:33:35.652 "uuid": "d52f280e-0cb7-519a-8e11-c1eecedae0f0", 00:33:35.652 "is_configured": true, 00:33:35.652 "data_offset": 2048, 00:33:35.652 "data_size": 63488 00:33:35.652 }, 00:33:35.652 { 00:33:35.652 "name": "BaseBdev2", 00:33:35.652 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:35.652 "is_configured": true, 00:33:35.652 "data_offset": 2048, 00:33:35.652 "data_size": 63488 00:33:35.652 }, 00:33:35.652 { 00:33:35.652 "name": "BaseBdev3", 00:33:35.652 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:35.652 "is_configured": true, 00:33:35.652 "data_offset": 2048, 00:33:35.652 "data_size": 63488 00:33:35.652 }, 00:33:35.652 { 00:33:35.652 "name": "BaseBdev4", 00:33:35.652 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:35.652 "is_configured": true, 00:33:35.652 "data_offset": 2048, 00:33:35.653 "data_size": 63488 00:33:35.653 } 00:33:35.653 ] 00:33:35.653 }' 00:33:35.653 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:35.653 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:35.653 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:35.910 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:35.910 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:35.910 13:44:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.910 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.910 [2024-10-28 13:44:49.820675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.910 [2024-10-28 13:44:49.874114] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:35.910 [2024-10-28 13:44:49.874262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:35.910 [2024-10-28 13:44:49.874300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.910 [2024-10-28 13:44:49.874313] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:35.910 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.910 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:35.911 13:44:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:35.911 "name": "raid_bdev1", 00:33:35.911 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:35.911 "strip_size_kb": 64, 00:33:35.911 "state": "online", 00:33:35.911 "raid_level": "raid5f", 00:33:35.911 "superblock": true, 00:33:35.911 "num_base_bdevs": 4, 00:33:35.911 "num_base_bdevs_discovered": 3, 00:33:35.911 "num_base_bdevs_operational": 3, 00:33:35.911 "base_bdevs_list": [ 00:33:35.911 { 00:33:35.911 "name": null, 00:33:35.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.911 "is_configured": false, 00:33:35.911 "data_offset": 0, 00:33:35.911 "data_size": 63488 00:33:35.911 }, 00:33:35.911 { 00:33:35.911 "name": "BaseBdev2", 00:33:35.911 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:35.911 "is_configured": true, 00:33:35.911 "data_offset": 2048, 00:33:35.911 "data_size": 63488 00:33:35.911 }, 00:33:35.911 { 00:33:35.911 "name": "BaseBdev3", 00:33:35.911 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:35.911 "is_configured": true, 00:33:35.911 "data_offset": 2048, 00:33:35.911 "data_size": 63488 00:33:35.911 }, 00:33:35.911 { 00:33:35.911 "name": "BaseBdev4", 00:33:35.911 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:35.911 "is_configured": true, 00:33:35.911 "data_offset": 2048, 00:33:35.911 
"data_size": 63488 00:33:35.911 } 00:33:35.911 ] 00:33:35.911 }' 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:35.911 13:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:36.477 "name": "raid_bdev1", 00:33:36.477 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:36.477 "strip_size_kb": 64, 00:33:36.477 "state": "online", 00:33:36.477 "raid_level": "raid5f", 00:33:36.477 "superblock": true, 00:33:36.477 "num_base_bdevs": 4, 00:33:36.477 "num_base_bdevs_discovered": 3, 00:33:36.477 "num_base_bdevs_operational": 3, 00:33:36.477 "base_bdevs_list": [ 00:33:36.477 { 00:33:36.477 "name": null, 00:33:36.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.477 
"is_configured": false, 00:33:36.477 "data_offset": 0, 00:33:36.477 "data_size": 63488 00:33:36.477 }, 00:33:36.477 { 00:33:36.477 "name": "BaseBdev2", 00:33:36.477 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:36.477 "is_configured": true, 00:33:36.477 "data_offset": 2048, 00:33:36.477 "data_size": 63488 00:33:36.477 }, 00:33:36.477 { 00:33:36.477 "name": "BaseBdev3", 00:33:36.477 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:36.477 "is_configured": true, 00:33:36.477 "data_offset": 2048, 00:33:36.477 "data_size": 63488 00:33:36.477 }, 00:33:36.477 { 00:33:36.477 "name": "BaseBdev4", 00:33:36.477 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:36.477 "is_configured": true, 00:33:36.477 "data_offset": 2048, 00:33:36.477 "data_size": 63488 00:33:36.477 } 00:33:36.477 ] 00:33:36.477 }' 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.477 13:44:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:36.477 [2024-10-28 13:44:50.569249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:36.477 [2024-10-28 13:44:50.569314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.477 [2024-10-28 13:44:50.569347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:33:36.477 [2024-10-28 13:44:50.569366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.477 [2024-10-28 13:44:50.569874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.477 [2024-10-28 13:44:50.569905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:36.477 [2024-10-28 13:44:50.569998] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:36.477 [2024-10-28 13:44:50.570017] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:36.477 [2024-10-28 13:44:50.570030] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:36.477 [2024-10-28 13:44:50.570042] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:36.477 BaseBdev1 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.477 13:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.432 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:37.690 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.690 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:37.690 "name": "raid_bdev1", 00:33:37.690 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:37.690 "strip_size_kb": 64, 00:33:37.690 "state": "online", 00:33:37.690 "raid_level": "raid5f", 00:33:37.690 "superblock": true, 00:33:37.690 "num_base_bdevs": 4, 00:33:37.690 "num_base_bdevs_discovered": 3, 00:33:37.690 "num_base_bdevs_operational": 3, 00:33:37.690 "base_bdevs_list": [ 00:33:37.690 { 00:33:37.690 "name": null, 00:33:37.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.690 "is_configured": false, 00:33:37.690 
"data_offset": 0, 00:33:37.690 "data_size": 63488 00:33:37.690 }, 00:33:37.690 { 00:33:37.690 "name": "BaseBdev2", 00:33:37.690 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:37.690 "is_configured": true, 00:33:37.690 "data_offset": 2048, 00:33:37.690 "data_size": 63488 00:33:37.690 }, 00:33:37.690 { 00:33:37.690 "name": "BaseBdev3", 00:33:37.690 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:37.690 "is_configured": true, 00:33:37.690 "data_offset": 2048, 00:33:37.690 "data_size": 63488 00:33:37.690 }, 00:33:37.690 { 00:33:37.690 "name": "BaseBdev4", 00:33:37.690 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:37.690 "is_configured": true, 00:33:37.690 "data_offset": 2048, 00:33:37.690 "data_size": 63488 00:33:37.690 } 00:33:37.690 ] 00:33:37.690 }' 00:33:37.690 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:37.690 13:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:33:37.949 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.207 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:38.207 "name": "raid_bdev1", 00:33:38.207 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:38.207 "strip_size_kb": 64, 00:33:38.207 "state": "online", 00:33:38.207 "raid_level": "raid5f", 00:33:38.207 "superblock": true, 00:33:38.207 "num_base_bdevs": 4, 00:33:38.207 "num_base_bdevs_discovered": 3, 00:33:38.207 "num_base_bdevs_operational": 3, 00:33:38.207 "base_bdevs_list": [ 00:33:38.207 { 00:33:38.207 "name": null, 00:33:38.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.207 "is_configured": false, 00:33:38.208 "data_offset": 0, 00:33:38.208 "data_size": 63488 00:33:38.208 }, 00:33:38.208 { 00:33:38.208 "name": "BaseBdev2", 00:33:38.208 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:38.208 "is_configured": true, 00:33:38.208 "data_offset": 2048, 00:33:38.208 "data_size": 63488 00:33:38.208 }, 00:33:38.208 { 00:33:38.208 "name": "BaseBdev3", 00:33:38.208 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:38.208 "is_configured": true, 00:33:38.208 "data_offset": 2048, 00:33:38.208 "data_size": 63488 00:33:38.208 }, 00:33:38.208 { 00:33:38.208 "name": "BaseBdev4", 00:33:38.208 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:38.208 "is_configured": true, 00:33:38.208 "data_offset": 2048, 00:33:38.208 "data_size": 63488 00:33:38.208 } 00:33:38.208 ] 00:33:38.208 }' 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:38.208 
13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:38.208 [2024-10-28 13:44:52.233786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:38.208 [2024-10-28 13:44:52.234165] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:38.208 [2024-10-28 13:44:52.234207] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:38.208 request: 00:33:38.208 { 00:33:38.208 "base_bdev": "BaseBdev1", 00:33:38.208 "raid_bdev": "raid_bdev1", 00:33:38.208 "method": "bdev_raid_add_base_bdev", 00:33:38.208 "req_id": 1 00:33:38.208 } 00:33:38.208 Got JSON-RPC error response 00:33:38.208 response: 00:33:38.208 { 00:33:38.208 "code": -22, 00:33:38.208 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:33:38.208 } 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:38.208 13:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:39.143 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.401 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.401 "name": "raid_bdev1", 00:33:39.401 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:39.401 "strip_size_kb": 64, 00:33:39.401 "state": "online", 00:33:39.401 "raid_level": "raid5f", 00:33:39.401 "superblock": true, 00:33:39.401 "num_base_bdevs": 4, 00:33:39.401 "num_base_bdevs_discovered": 3, 00:33:39.401 "num_base_bdevs_operational": 3, 00:33:39.401 "base_bdevs_list": [ 00:33:39.401 { 00:33:39.401 "name": null, 00:33:39.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.401 "is_configured": false, 00:33:39.401 "data_offset": 0, 00:33:39.401 "data_size": 63488 00:33:39.401 }, 00:33:39.401 { 00:33:39.401 "name": "BaseBdev2", 00:33:39.401 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:39.401 "is_configured": true, 00:33:39.401 "data_offset": 2048, 00:33:39.401 "data_size": 63488 00:33:39.401 }, 00:33:39.401 { 00:33:39.401 "name": "BaseBdev3", 00:33:39.401 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:39.401 "is_configured": true, 00:33:39.401 "data_offset": 2048, 00:33:39.401 "data_size": 63488 00:33:39.401 }, 00:33:39.401 { 00:33:39.401 "name": "BaseBdev4", 00:33:39.401 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:39.401 "is_configured": true, 00:33:39.401 "data_offset": 2048, 00:33:39.401 "data_size": 63488 00:33:39.401 } 00:33:39.401 ] 00:33:39.401 }' 00:33:39.401 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.401 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.658 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:39.658 "name": "raid_bdev1", 00:33:39.658 "uuid": "266efedd-0dea-421e-bed3-9d8bdadf55d5", 00:33:39.658 "strip_size_kb": 64, 00:33:39.658 "state": "online", 00:33:39.658 "raid_level": "raid5f", 00:33:39.658 "superblock": true, 00:33:39.658 "num_base_bdevs": 4, 00:33:39.658 "num_base_bdevs_discovered": 3, 00:33:39.658 "num_base_bdevs_operational": 3, 00:33:39.658 "base_bdevs_list": [ 00:33:39.658 { 00:33:39.658 "name": null, 00:33:39.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.658 "is_configured": false, 00:33:39.658 "data_offset": 0, 00:33:39.658 "data_size": 63488 00:33:39.658 }, 00:33:39.658 { 00:33:39.658 "name": "BaseBdev2", 00:33:39.658 "uuid": "5ae04e90-40cb-598a-ab92-e6e6308e1fad", 00:33:39.658 "is_configured": true, 
00:33:39.658 "data_offset": 2048, 00:33:39.658 "data_size": 63488 00:33:39.658 }, 00:33:39.658 { 00:33:39.658 "name": "BaseBdev3", 00:33:39.658 "uuid": "032b39a7-93b8-578e-a3b5-4a5c48b7ff8a", 00:33:39.658 "is_configured": true, 00:33:39.658 "data_offset": 2048, 00:33:39.658 "data_size": 63488 00:33:39.659 }, 00:33:39.659 { 00:33:39.659 "name": "BaseBdev4", 00:33:39.659 "uuid": "77776c84-1c9a-5ee6-9d22-c83ebcbd642e", 00:33:39.659 "is_configured": true, 00:33:39.659 "data_offset": 2048, 00:33:39.659 "data_size": 63488 00:33:39.659 } 00:33:39.659 ] 00:33:39.659 }' 00:33:39.659 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 97818 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 97818 ']' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 97818 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97818 00:33:39.916 killing process with pid 97818 00:33:39.916 Received shutdown signal, test time was about 60.000000 seconds 00:33:39.916 00:33:39.916 Latency(us) 00:33:39.916 [2024-10-28T13:44:54.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.916 [2024-10-28T13:44:54.076Z] 
=================================================================================================================== 00:33:39.916 [2024-10-28T13:44:54.076Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97818' 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 97818 00:33:39.916 [2024-10-28 13:44:53.933469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:39.916 13:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 97818 00:33:39.916 [2024-10-28 13:44:53.933635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:39.917 [2024-10-28 13:44:53.933735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:39.917 [2024-10-28 13:44:53.933761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:39.917 [2024-10-28 13:44:53.984033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:40.173 13:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:33:40.173 00:33:40.173 real 0m26.959s 00:33:40.173 user 0m35.555s 00:33:40.173 sys 0m2.783s 00:33:40.173 ************************************ 00:33:40.173 END TEST raid5f_rebuild_test_sb 00:33:40.173 13:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:40.173 13:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:40.173 ************************************ 00:33:40.173 13:44:54 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:33:40.173 13:44:54 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:33:40.173 13:44:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:40.173 13:44:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:40.173 13:44:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:40.173 ************************************ 00:33:40.174 START TEST raid_state_function_test_sb_4k 00:33:40.174 ************************************ 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:40.174 13:44:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=98622 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98622' 00:33:40.174 Process raid pid: 98622 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 98622 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 98622 ']' 00:33:40.174 13:44:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.174 13:44:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:40.431 [2024-10-28 13:44:54.388593] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:33:40.431 [2024-10-28 13:44:54.389026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.431 [2024-10-28 13:44:54.543320] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:40.431 [2024-10-28 13:44:54.574694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.688 [2024-10-28 13:44:54.628225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.688 [2024-10-28 13:44:54.687101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:40.688 [2024-10-28 13:44:54.687402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.253 [2024-10-28 13:44:55.353743] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:41.253 [2024-10-28 13:44:55.353833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:41.253 [2024-10-28 13:44:55.353853] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:41.253 [2024-10-28 13:44:55.353867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:41.253 
13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:41.253 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.509 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:41.509 "name": "Existed_Raid", 00:33:41.509 "uuid": "29169dc2-074e-4bea-9106-f6c7fe54fa38", 00:33:41.509 "strip_size_kb": 0, 00:33:41.509 "state": "configuring", 00:33:41.509 "raid_level": "raid1", 00:33:41.509 "superblock": true, 00:33:41.509 "num_base_bdevs": 2, 00:33:41.509 "num_base_bdevs_discovered": 0, 00:33:41.509 "num_base_bdevs_operational": 2, 
00:33:41.509 "base_bdevs_list": [ 00:33:41.509 { 00:33:41.509 "name": "BaseBdev1", 00:33:41.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.509 "is_configured": false, 00:33:41.509 "data_offset": 0, 00:33:41.509 "data_size": 0 00:33:41.509 }, 00:33:41.509 { 00:33:41.509 "name": "BaseBdev2", 00:33:41.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.509 "is_configured": false, 00:33:41.509 "data_offset": 0, 00:33:41.509 "data_size": 0 00:33:41.509 } 00:33:41.509 ] 00:33:41.509 }' 00:33:41.509 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:41.509 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.767 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:41.767 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.767 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.767 [2024-10-28 13:44:55.869785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:41.768 [2024-10-28 13:44:55.869831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.768 [2024-10-28 13:44:55.877822] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:33:41.768 [2024-10-28 13:44:55.877869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:41.768 [2024-10-28 13:44:55.877889] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:41.768 [2024-10-28 13:44:55.877917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.768 [2024-10-28 13:44:55.898911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:41.768 BaseBdev1 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:41.768 13:44:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.768 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:41.768 [ 00:33:41.768 { 00:33:41.768 "name": "BaseBdev1", 00:33:41.768 "aliases": [ 00:33:41.768 "f44af591-759d-4d75-8381-3c7c0ed1ea5f" 00:33:41.768 ], 00:33:41.768 "product_name": "Malloc disk", 00:33:41.768 "block_size": 4096, 00:33:41.768 "num_blocks": 8192, 00:33:41.768 "uuid": "f44af591-759d-4d75-8381-3c7c0ed1ea5f", 00:33:41.768 "assigned_rate_limits": { 00:33:41.768 "rw_ios_per_sec": 0, 00:33:41.768 "rw_mbytes_per_sec": 0, 00:33:41.768 "r_mbytes_per_sec": 0, 00:33:41.768 "w_mbytes_per_sec": 0 00:33:41.768 }, 00:33:41.768 "claimed": true, 00:33:41.768 "claim_type": "exclusive_write", 00:33:41.768 "zoned": false, 00:33:41.768 "supported_io_types": { 00:33:41.768 "read": true, 00:33:41.768 "write": true, 00:33:42.026 "unmap": true, 00:33:42.026 "flush": true, 00:33:42.026 "reset": true, 00:33:42.026 "nvme_admin": false, 00:33:42.026 "nvme_io": false, 00:33:42.026 "nvme_io_md": false, 00:33:42.026 "write_zeroes": true, 00:33:42.026 "zcopy": true, 00:33:42.026 "get_zone_info": false, 00:33:42.026 "zone_management": false, 00:33:42.026 "zone_append": false, 00:33:42.026 "compare": false, 00:33:42.026 "compare_and_write": false, 00:33:42.026 "abort": true, 00:33:42.026 "seek_hole": false, 00:33:42.026 "seek_data": false, 00:33:42.026 "copy": true, 00:33:42.026 "nvme_iov_md": false 
00:33:42.026 }, 00:33:42.026 "memory_domains": [ 00:33:42.026 { 00:33:42.026 "dma_device_id": "system", 00:33:42.026 "dma_device_type": 1 00:33:42.026 }, 00:33:42.026 { 00:33:42.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.026 "dma_device_type": 2 00:33:42.026 } 00:33:42.026 ], 00:33:42.026 "driver_specific": {} 00:33:42.026 } 00:33:42.026 ] 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:42.026 "name": "Existed_Raid", 00:33:42.026 "uuid": "48b4057f-2760-425b-aa52-41b26eb029a5", 00:33:42.026 "strip_size_kb": 0, 00:33:42.026 "state": "configuring", 00:33:42.026 "raid_level": "raid1", 00:33:42.026 "superblock": true, 00:33:42.026 "num_base_bdevs": 2, 00:33:42.026 "num_base_bdevs_discovered": 1, 00:33:42.026 "num_base_bdevs_operational": 2, 00:33:42.026 "base_bdevs_list": [ 00:33:42.026 { 00:33:42.026 "name": "BaseBdev1", 00:33:42.026 "uuid": "f44af591-759d-4d75-8381-3c7c0ed1ea5f", 00:33:42.026 "is_configured": true, 00:33:42.026 "data_offset": 256, 00:33:42.026 "data_size": 7936 00:33:42.026 }, 00:33:42.026 { 00:33:42.026 "name": "BaseBdev2", 00:33:42.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.026 "is_configured": false, 00:33:42.026 "data_offset": 0, 00:33:42.026 "data_size": 0 00:33:42.026 } 00:33:42.026 ] 00:33:42.026 }' 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:42.026 13:44:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.285 [2024-10-28 
13:44:56.435120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:42.285 [2024-10-28 13:44:56.435358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.285 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.543 [2024-10-28 13:44:56.443115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:42.543 [2024-10-28 13:44:56.445774] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:42.543 [2024-10-28 13:44:56.445965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:42.543 "name": "Existed_Raid", 00:33:42.543 "uuid": "d85ab25f-19dd-4fe7-80a8-b251e0ebd993", 00:33:42.543 "strip_size_kb": 0, 00:33:42.543 "state": "configuring", 00:33:42.543 "raid_level": "raid1", 00:33:42.543 "superblock": true, 00:33:42.543 "num_base_bdevs": 2, 00:33:42.543 "num_base_bdevs_discovered": 1, 00:33:42.543 "num_base_bdevs_operational": 2, 00:33:42.543 "base_bdevs_list": [ 00:33:42.543 { 00:33:42.543 "name": "BaseBdev1", 00:33:42.543 "uuid": "f44af591-759d-4d75-8381-3c7c0ed1ea5f", 00:33:42.543 "is_configured": true, 00:33:42.543 "data_offset": 256, 
00:33:42.543 "data_size": 7936 00:33:42.543 }, 00:33:42.543 { 00:33:42.543 "name": "BaseBdev2", 00:33:42.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.543 "is_configured": false, 00:33:42.543 "data_offset": 0, 00:33:42.543 "data_size": 0 00:33:42.543 } 00:33:42.543 ] 00:33:42.543 }' 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:42.543 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.110 [2024-10-28 13:44:56.981415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:43.110 [2024-10-28 13:44:56.981714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:43.110 [2024-10-28 13:44:56.981737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:43.110 BaseBdev2 00:33:43.110 [2024-10-28 13:44:56.982080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:43.110 [2024-10-28 13:44:56.982328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:43.110 [2024-10-28 13:44:56.982396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:33:43.110 [2024-10-28 13:44:56.982582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.110 13:44:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.110 [ 00:33:43.110 { 00:33:43.110 "name": "BaseBdev2", 00:33:43.110 "aliases": [ 00:33:43.110 "507981c4-af3f-4866-9d05-e9768e96d15c" 00:33:43.110 ], 00:33:43.110 "product_name": "Malloc disk", 00:33:43.110 "block_size": 4096, 00:33:43.110 "num_blocks": 8192, 00:33:43.110 "uuid": "507981c4-af3f-4866-9d05-e9768e96d15c", 00:33:43.110 "assigned_rate_limits": { 00:33:43.110 "rw_ios_per_sec": 0, 00:33:43.110 "rw_mbytes_per_sec": 0, 00:33:43.110 "r_mbytes_per_sec": 0, 00:33:43.110 "w_mbytes_per_sec": 0 00:33:43.110 }, 
00:33:43.110 "claimed": true, 00:33:43.110 "claim_type": "exclusive_write", 00:33:43.110 "zoned": false, 00:33:43.110 "supported_io_types": { 00:33:43.110 "read": true, 00:33:43.110 "write": true, 00:33:43.110 "unmap": true, 00:33:43.110 "flush": true, 00:33:43.110 "reset": true, 00:33:43.110 "nvme_admin": false, 00:33:43.110 "nvme_io": false, 00:33:43.110 "nvme_io_md": false, 00:33:43.110 "write_zeroes": true, 00:33:43.110 "zcopy": true, 00:33:43.110 "get_zone_info": false, 00:33:43.110 "zone_management": false, 00:33:43.110 "zone_append": false, 00:33:43.110 "compare": false, 00:33:43.110 "compare_and_write": false, 00:33:43.110 "abort": true, 00:33:43.110 "seek_hole": false, 00:33:43.110 "seek_data": false, 00:33:43.110 "copy": true, 00:33:43.110 "nvme_iov_md": false 00:33:43.110 }, 00:33:43.110 "memory_domains": [ 00:33:43.110 { 00:33:43.110 "dma_device_id": "system", 00:33:43.110 "dma_device_type": 1 00:33:43.110 }, 00:33:43.110 { 00:33:43.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:43.111 "dma_device_type": 2 00:33:43.111 } 00:33:43.111 ], 00:33:43.111 "driver_specific": {} 00:33:43.111 } 00:33:43.111 ] 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:43.111 13:44:57 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.111 "name": "Existed_Raid", 00:33:43.111 "uuid": "d85ab25f-19dd-4fe7-80a8-b251e0ebd993", 00:33:43.111 "strip_size_kb": 0, 00:33:43.111 "state": "online", 00:33:43.111 "raid_level": "raid1", 00:33:43.111 "superblock": true, 00:33:43.111 "num_base_bdevs": 2, 00:33:43.111 "num_base_bdevs_discovered": 2, 00:33:43.111 "num_base_bdevs_operational": 2, 00:33:43.111 "base_bdevs_list": [ 00:33:43.111 { 00:33:43.111 "name": "BaseBdev1", 00:33:43.111 "uuid": 
"f44af591-759d-4d75-8381-3c7c0ed1ea5f", 00:33:43.111 "is_configured": true, 00:33:43.111 "data_offset": 256, 00:33:43.111 "data_size": 7936 00:33:43.111 }, 00:33:43.111 { 00:33:43.111 "name": "BaseBdev2", 00:33:43.111 "uuid": "507981c4-af3f-4866-9d05-e9768e96d15c", 00:33:43.111 "is_configured": true, 00:33:43.111 "data_offset": 256, 00:33:43.111 "data_size": 7936 00:33:43.111 } 00:33:43.111 ] 00:33:43.111 }' 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.111 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.678 [2024-10-28 13:44:57.562024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.678 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:43.678 "name": "Existed_Raid", 00:33:43.678 "aliases": [ 00:33:43.678 "d85ab25f-19dd-4fe7-80a8-b251e0ebd993" 00:33:43.678 ], 00:33:43.678 "product_name": "Raid Volume", 00:33:43.678 "block_size": 4096, 00:33:43.678 "num_blocks": 7936, 00:33:43.678 "uuid": "d85ab25f-19dd-4fe7-80a8-b251e0ebd993", 00:33:43.678 "assigned_rate_limits": { 00:33:43.678 "rw_ios_per_sec": 0, 00:33:43.678 "rw_mbytes_per_sec": 0, 00:33:43.678 "r_mbytes_per_sec": 0, 00:33:43.678 "w_mbytes_per_sec": 0 00:33:43.678 }, 00:33:43.678 "claimed": false, 00:33:43.678 "zoned": false, 00:33:43.678 "supported_io_types": { 00:33:43.678 "read": true, 00:33:43.678 "write": true, 00:33:43.678 "unmap": false, 00:33:43.678 "flush": false, 00:33:43.678 "reset": true, 00:33:43.678 "nvme_admin": false, 00:33:43.678 "nvme_io": false, 00:33:43.678 "nvme_io_md": false, 00:33:43.678 "write_zeroes": true, 00:33:43.678 "zcopy": false, 00:33:43.678 "get_zone_info": false, 00:33:43.678 "zone_management": false, 00:33:43.678 "zone_append": false, 00:33:43.678 "compare": false, 00:33:43.678 "compare_and_write": false, 00:33:43.678 "abort": false, 00:33:43.678 "seek_hole": false, 00:33:43.678 "seek_data": false, 00:33:43.678 "copy": false, 00:33:43.678 "nvme_iov_md": false 00:33:43.678 }, 00:33:43.678 "memory_domains": [ 00:33:43.678 { 00:33:43.678 "dma_device_id": "system", 00:33:43.678 "dma_device_type": 1 00:33:43.678 }, 00:33:43.678 { 00:33:43.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:43.678 "dma_device_type": 2 00:33:43.678 }, 00:33:43.678 { 00:33:43.678 "dma_device_id": "system", 00:33:43.678 "dma_device_type": 1 00:33:43.678 }, 00:33:43.678 { 00:33:43.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:43.678 "dma_device_type": 2 00:33:43.678 } 00:33:43.678 ], 00:33:43.678 "driver_specific": { 00:33:43.678 "raid": { 00:33:43.678 "uuid": 
"d85ab25f-19dd-4fe7-80a8-b251e0ebd993", 00:33:43.678 "strip_size_kb": 0, 00:33:43.678 "state": "online", 00:33:43.678 "raid_level": "raid1", 00:33:43.678 "superblock": true, 00:33:43.678 "num_base_bdevs": 2, 00:33:43.678 "num_base_bdevs_discovered": 2, 00:33:43.678 "num_base_bdevs_operational": 2, 00:33:43.678 "base_bdevs_list": [ 00:33:43.678 { 00:33:43.678 "name": "BaseBdev1", 00:33:43.678 "uuid": "f44af591-759d-4d75-8381-3c7c0ed1ea5f", 00:33:43.678 "is_configured": true, 00:33:43.678 "data_offset": 256, 00:33:43.678 "data_size": 7936 00:33:43.678 }, 00:33:43.678 { 00:33:43.678 "name": "BaseBdev2", 00:33:43.678 "uuid": "507981c4-af3f-4866-9d05-e9768e96d15c", 00:33:43.678 "is_configured": true, 00:33:43.678 "data_offset": 256, 00:33:43.678 "data_size": 7936 00:33:43.678 } 00:33:43.678 ] 00:33:43.679 } 00:33:43.679 } 00:33:43.679 }' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:43.679 BaseBdev2' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.679 
13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.679 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.679 [2024-10-28 13:44:57.821779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:43.937 "name": "Existed_Raid", 00:33:43.937 "uuid": "d85ab25f-19dd-4fe7-80a8-b251e0ebd993", 00:33:43.937 "strip_size_kb": 0, 00:33:43.937 "state": "online", 00:33:43.937 "raid_level": "raid1", 00:33:43.937 "superblock": true, 00:33:43.937 "num_base_bdevs": 2, 00:33:43.937 "num_base_bdevs_discovered": 1, 00:33:43.937 "num_base_bdevs_operational": 1, 00:33:43.937 "base_bdevs_list": [ 00:33:43.937 { 00:33:43.937 "name": null, 00:33:43.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.937 "is_configured": false, 00:33:43.937 "data_offset": 0, 00:33:43.937 "data_size": 7936 00:33:43.937 }, 00:33:43.937 { 00:33:43.937 "name": "BaseBdev2", 00:33:43.937 "uuid": "507981c4-af3f-4866-9d05-e9768e96d15c", 00:33:43.937 "is_configured": true, 00:33:43.937 "data_offset": 256, 00:33:43.937 "data_size": 7936 00:33:43.937 } 00:33:43.937 ] 00:33:43.937 }' 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:43.937 13:44:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.194 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:44.194 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.450 [2024-10-28 13:44:58.409769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:44.450 [2024-10-28 13:44:58.409898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:44.450 [2024-10-28 13:44:58.421523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:44.450 [2024-10-28 13:44:58.421611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:44.450 [2024-10-28 13:44:58.421627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 
00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:44.450 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 98622 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 98622 ']' 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 98622 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98622 00:33:44.451 killing process with pid 98622 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 98622' 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 98622 00:33:44.451 [2024-10-28 13:44:58.511536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:44.451 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 98622 00:33:44.451 [2024-10-28 13:44:58.512870] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:44.709 ************************************ 00:33:44.709 END TEST raid_state_function_test_sb_4k 00:33:44.709 ************************************ 00:33:44.709 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:33:44.709 00:33:44.709 real 0m4.478s 00:33:44.709 user 0m7.333s 00:33:44.709 sys 0m0.740s 00:33:44.709 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:44.709 13:44:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.709 13:44:58 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:33:44.709 13:44:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:44.709 13:44:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:44.709 13:44:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:44.709 ************************************ 00:33:44.709 START TEST raid_superblock_test_4k 00:33:44.709 ************************************ 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:44.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=98869 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 98869 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 98869 ']' 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:44.709 13:44:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:44.967 [2024-10-28 13:44:58.909230] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:33:44.967 [2024-10-28 13:44:58.909645] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98869 ] 00:33:44.967 [2024-10-28 13:44:59.062442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:44.967 [2024-10-28 13:44:59.093433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.224 [2024-10-28 13:44:59.143440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.224 [2024-10-28 13:44:59.203641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:45.224 [2024-10-28 13:44:59.203992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.789 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.048 malloc1 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.048 [2024-10-28 13:44:59.954732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:46.048 [2024-10-28 13:44:59.954820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:46.048 [2024-10-28 13:44:59.954861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:46.048 [2024-10-28 13:44:59.954885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:46.048 [2024-10-28 13:44:59.957867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:46.048 [2024-10-28 13:44:59.957910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:46.048 pt1 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:46.048 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:46.048 13:44:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.049 malloc2 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.049 [2024-10-28 13:44:59.986925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:46.049 [2024-10-28 13:44:59.987002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:46.049 [2024-10-28 13:44:59.987031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:46.049 [2024-10-28 13:44:59.987045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:46.049 [2024-10-28 13:44:59.989907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:46.049 [2024-10-28 13:44:59.989950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:46.049 pt2 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.049 13:44:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.049 [2024-10-28 13:44:59.998969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:46.049 [2024-10-28 13:45:00.001479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:46.049 [2024-10-28 13:45:00.001683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:33:46.049 [2024-10-28 13:45:00.001702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:46.049 [2024-10-28 13:45:00.002039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:46.049 [2024-10-28 13:45:00.002301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:33:46.049 [2024-10-28 13:45:00.002333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:33:46.049 [2024-10-28 13:45:00.002501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:46.049 13:45:00 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:46.049 "name": "raid_bdev1", 00:33:46.049 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:46.049 "strip_size_kb": 0, 00:33:46.049 "state": "online", 00:33:46.049 "raid_level": "raid1", 00:33:46.049 "superblock": true, 00:33:46.049 "num_base_bdevs": 2, 00:33:46.049 "num_base_bdevs_discovered": 2, 00:33:46.049 "num_base_bdevs_operational": 2, 00:33:46.049 "base_bdevs_list": [ 00:33:46.049 { 00:33:46.049 "name": "pt1", 00:33:46.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:46.049 "is_configured": true, 00:33:46.049 "data_offset": 256, 00:33:46.049 "data_size": 
7936 00:33:46.049 }, 00:33:46.049 { 00:33:46.049 "name": "pt2", 00:33:46.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:46.049 "is_configured": true, 00:33:46.049 "data_offset": 256, 00:33:46.049 "data_size": 7936 00:33:46.049 } 00:33:46.049 ] 00:33:46.049 }' 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:46.049 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:46.616 [2024-10-28 13:45:00.511529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:46.616 "name": "raid_bdev1", 00:33:46.616 "aliases": [ 00:33:46.616 
"0c75df8d-af83-4f34-90c8-b6af5bdacc50" 00:33:46.616 ], 00:33:46.616 "product_name": "Raid Volume", 00:33:46.616 "block_size": 4096, 00:33:46.616 "num_blocks": 7936, 00:33:46.616 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:46.616 "assigned_rate_limits": { 00:33:46.616 "rw_ios_per_sec": 0, 00:33:46.616 "rw_mbytes_per_sec": 0, 00:33:46.616 "r_mbytes_per_sec": 0, 00:33:46.616 "w_mbytes_per_sec": 0 00:33:46.616 }, 00:33:46.616 "claimed": false, 00:33:46.616 "zoned": false, 00:33:46.616 "supported_io_types": { 00:33:46.616 "read": true, 00:33:46.616 "write": true, 00:33:46.616 "unmap": false, 00:33:46.616 "flush": false, 00:33:46.616 "reset": true, 00:33:46.616 "nvme_admin": false, 00:33:46.616 "nvme_io": false, 00:33:46.616 "nvme_io_md": false, 00:33:46.616 "write_zeroes": true, 00:33:46.616 "zcopy": false, 00:33:46.616 "get_zone_info": false, 00:33:46.616 "zone_management": false, 00:33:46.616 "zone_append": false, 00:33:46.616 "compare": false, 00:33:46.616 "compare_and_write": false, 00:33:46.616 "abort": false, 00:33:46.616 "seek_hole": false, 00:33:46.616 "seek_data": false, 00:33:46.616 "copy": false, 00:33:46.616 "nvme_iov_md": false 00:33:46.616 }, 00:33:46.616 "memory_domains": [ 00:33:46.616 { 00:33:46.616 "dma_device_id": "system", 00:33:46.616 "dma_device_type": 1 00:33:46.616 }, 00:33:46.616 { 00:33:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.616 "dma_device_type": 2 00:33:46.616 }, 00:33:46.616 { 00:33:46.616 "dma_device_id": "system", 00:33:46.616 "dma_device_type": 1 00:33:46.616 }, 00:33:46.616 { 00:33:46.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:46.616 "dma_device_type": 2 00:33:46.616 } 00:33:46.616 ], 00:33:46.616 "driver_specific": { 00:33:46.616 "raid": { 00:33:46.616 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:46.616 "strip_size_kb": 0, 00:33:46.616 "state": "online", 00:33:46.616 "raid_level": "raid1", 00:33:46.616 "superblock": true, 00:33:46.616 "num_base_bdevs": 2, 00:33:46.616 
"num_base_bdevs_discovered": 2, 00:33:46.616 "num_base_bdevs_operational": 2, 00:33:46.616 "base_bdevs_list": [ 00:33:46.616 { 00:33:46.616 "name": "pt1", 00:33:46.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:46.616 "is_configured": true, 00:33:46.616 "data_offset": 256, 00:33:46.616 "data_size": 7936 00:33:46.616 }, 00:33:46.616 { 00:33:46.616 "name": "pt2", 00:33:46.616 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:46.616 "is_configured": true, 00:33:46.616 "data_offset": 256, 00:33:46.616 "data_size": 7936 00:33:46.616 } 00:33:46.616 ] 00:33:46.616 } 00:33:46.616 } 00:33:46.616 }' 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:46.616 pt2' 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:46.616 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.617 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:46.875 [2024-10-28 13:45:00.787470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0c75df8d-af83-4f34-90c8-b6af5bdacc50 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 0c75df8d-af83-4f34-90c8-b6af5bdacc50 ']' 00:33:46.875 
13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 [2024-10-28 13:45:00.835127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:46.875 [2024-10-28 13:45:00.835175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:46.875 [2024-10-28 13:45:00.835306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:46.875 [2024-10-28 13:45:00.835390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:46.875 [2024-10-28 13:45:00.835443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:46.875 13:45:00 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 [2024-10-28 13:45:00.975273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:46.875 [2024-10-28 13:45:00.977796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:46.875 [2024-10-28 13:45:00.977887] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:46.875 [2024-10-28 13:45:00.977974] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:46.875 [2024-10-28 13:45:00.978002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:46.875 [2024-10-28 13:45:00.978019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:33:46.875 request: 00:33:46.875 { 00:33:46.875 "name": "raid_bdev1", 00:33:46.875 "raid_level": "raid1", 00:33:46.875 "base_bdevs": [ 00:33:46.875 "malloc1", 
00:33:46.875 "malloc2" 00:33:46.875 ], 00:33:46.875 "superblock": false, 00:33:46.875 "method": "bdev_raid_create", 00:33:46.875 "req_id": 1 00:33:46.875 } 00:33:46.875 Got JSON-RPC error response 00:33:46.875 response: 00:33:46.875 { 00:33:46.875 "code": -17, 00:33:46.875 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:46.875 } 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:46.875 13:45:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:33:47.133 [2024-10-28 13:45:01.039279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:47.133 [2024-10-28 13:45:01.039357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.133 [2024-10-28 13:45:01.039386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:47.133 [2024-10-28 13:45:01.039407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.133 [2024-10-28 13:45:01.042309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.133 [2024-10-28 13:45:01.042361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:47.133 [2024-10-28 13:45:01.042457] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:47.133 [2024-10-28 13:45:01.042522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:47.133 pt1 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.133 "name": "raid_bdev1", 00:33:47.133 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:47.133 "strip_size_kb": 0, 00:33:47.133 "state": "configuring", 00:33:47.133 "raid_level": "raid1", 00:33:47.133 "superblock": true, 00:33:47.133 "num_base_bdevs": 2, 00:33:47.133 "num_base_bdevs_discovered": 1, 00:33:47.133 "num_base_bdevs_operational": 2, 00:33:47.133 "base_bdevs_list": [ 00:33:47.133 { 00:33:47.133 "name": "pt1", 00:33:47.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:47.133 "is_configured": true, 00:33:47.133 "data_offset": 256, 00:33:47.133 "data_size": 7936 00:33:47.133 }, 00:33:47.133 { 00:33:47.133 "name": null, 00:33:47.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:47.133 "is_configured": false, 00:33:47.133 "data_offset": 256, 00:33:47.133 "data_size": 7936 00:33:47.133 } 00:33:47.133 ] 00:33:47.133 }' 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.133 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.438 13:45:01 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.438 [2024-10-28 13:45:01.563467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:47.438 [2024-10-28 13:45:01.563560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.438 [2024-10-28 13:45:01.563596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:47.438 [2024-10-28 13:45:01.563615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.438 [2024-10-28 13:45:01.564167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.438 [2024-10-28 13:45:01.564206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:47.438 [2024-10-28 13:45:01.564304] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:47.438 [2024-10-28 13:45:01.564341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:47.438 [2024-10-28 13:45:01.564465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:47.438 [2024-10-28 13:45:01.564487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:47.438 [2024-10-28 13:45:01.564780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:33:47.438 [2024-10-28 13:45:01.564959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:47.438 [2024-10-28 13:45:01.564975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:47.438 [2024-10-28 13:45:01.565118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.438 pt2 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.438 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.696 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.696 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.696 "name": "raid_bdev1", 00:33:47.696 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:47.696 "strip_size_kb": 0, 00:33:47.696 "state": "online", 00:33:47.696 "raid_level": "raid1", 00:33:47.696 "superblock": true, 00:33:47.696 "num_base_bdevs": 2, 00:33:47.696 "num_base_bdevs_discovered": 2, 00:33:47.696 "num_base_bdevs_operational": 2, 00:33:47.696 "base_bdevs_list": [ 00:33:47.696 { 00:33:47.696 "name": "pt1", 00:33:47.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:47.696 "is_configured": true, 00:33:47.696 "data_offset": 256, 00:33:47.696 "data_size": 7936 00:33:47.696 }, 00:33:47.696 { 00:33:47.696 "name": "pt2", 00:33:47.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:47.696 "is_configured": true, 00:33:47.696 "data_offset": 256, 00:33:47.696 "data_size": 7936 00:33:47.696 } 00:33:47.696 ] 00:33:47.696 }' 00:33:47.696 13:45:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.696 13:45:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.954 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:47.954 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:47.954 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:47.954 
13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:47.954 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:33:47.954 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:47.955 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:47.955 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:47.955 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.955 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:47.955 [2024-10-28 13:45:02.099967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.213 "name": "raid_bdev1", 00:33:48.213 "aliases": [ 00:33:48.213 "0c75df8d-af83-4f34-90c8-b6af5bdacc50" 00:33:48.213 ], 00:33:48.213 "product_name": "Raid Volume", 00:33:48.213 "block_size": 4096, 00:33:48.213 "num_blocks": 7936, 00:33:48.213 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:48.213 "assigned_rate_limits": { 00:33:48.213 "rw_ios_per_sec": 0, 00:33:48.213 "rw_mbytes_per_sec": 0, 00:33:48.213 "r_mbytes_per_sec": 0, 00:33:48.213 "w_mbytes_per_sec": 0 00:33:48.213 }, 00:33:48.213 "claimed": false, 00:33:48.213 "zoned": false, 00:33:48.213 "supported_io_types": { 00:33:48.213 "read": true, 00:33:48.213 "write": true, 00:33:48.213 "unmap": false, 00:33:48.213 "flush": false, 00:33:48.213 "reset": true, 00:33:48.213 "nvme_admin": false, 00:33:48.213 "nvme_io": false, 00:33:48.213 "nvme_io_md": false, 00:33:48.213 "write_zeroes": true, 00:33:48.213 "zcopy": false, 00:33:48.213 "get_zone_info": 
false, 00:33:48.213 "zone_management": false, 00:33:48.213 "zone_append": false, 00:33:48.213 "compare": false, 00:33:48.213 "compare_and_write": false, 00:33:48.213 "abort": false, 00:33:48.213 "seek_hole": false, 00:33:48.213 "seek_data": false, 00:33:48.213 "copy": false, 00:33:48.213 "nvme_iov_md": false 00:33:48.213 }, 00:33:48.213 "memory_domains": [ 00:33:48.213 { 00:33:48.213 "dma_device_id": "system", 00:33:48.213 "dma_device_type": 1 00:33:48.213 }, 00:33:48.213 { 00:33:48.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.213 "dma_device_type": 2 00:33:48.213 }, 00:33:48.213 { 00:33:48.213 "dma_device_id": "system", 00:33:48.213 "dma_device_type": 1 00:33:48.213 }, 00:33:48.213 { 00:33:48.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.213 "dma_device_type": 2 00:33:48.213 } 00:33:48.213 ], 00:33:48.213 "driver_specific": { 00:33:48.213 "raid": { 00:33:48.213 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:48.213 "strip_size_kb": 0, 00:33:48.213 "state": "online", 00:33:48.213 "raid_level": "raid1", 00:33:48.213 "superblock": true, 00:33:48.213 "num_base_bdevs": 2, 00:33:48.213 "num_base_bdevs_discovered": 2, 00:33:48.213 "num_base_bdevs_operational": 2, 00:33:48.213 "base_bdevs_list": [ 00:33:48.213 { 00:33:48.213 "name": "pt1", 00:33:48.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:48.213 "is_configured": true, 00:33:48.213 "data_offset": 256, 00:33:48.213 "data_size": 7936 00:33:48.213 }, 00:33:48.213 { 00:33:48.213 "name": "pt2", 00:33:48.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:48.213 "is_configured": true, 00:33:48.213 "data_offset": 256, 00:33:48.213 "data_size": 7936 00:33:48.213 } 00:33:48.213 ] 00:33:48.213 } 00:33:48.213 } 00:33:48.213 }' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:33:48.213 pt2' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.213 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.472 13:45:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:33:48.472 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:48.473 [2024-10-28 13:45:02.380082] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 0c75df8d-af83-4f34-90c8-b6af5bdacc50 '!=' 0c75df8d-af83-4f34-90c8-b6af5bdacc50 ']' 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.473 [2024-10-28 13:45:02.431770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:48.473 "name": "raid_bdev1", 00:33:48.473 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:48.473 "strip_size_kb": 0, 00:33:48.473 "state": "online", 00:33:48.473 "raid_level": "raid1", 00:33:48.473 "superblock": true, 00:33:48.473 "num_base_bdevs": 2, 00:33:48.473 "num_base_bdevs_discovered": 1, 
00:33:48.473 "num_base_bdevs_operational": 1, 00:33:48.473 "base_bdevs_list": [ 00:33:48.473 { 00:33:48.473 "name": null, 00:33:48.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.473 "is_configured": false, 00:33:48.473 "data_offset": 0, 00:33:48.473 "data_size": 7936 00:33:48.473 }, 00:33:48.473 { 00:33:48.473 "name": "pt2", 00:33:48.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:48.473 "is_configured": true, 00:33:48.473 "data_offset": 256, 00:33:48.473 "data_size": 7936 00:33:48.473 } 00:33:48.473 ] 00:33:48.473 }' 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:48.473 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 13:45:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:49.041 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.041 13:45:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 [2024-10-28 13:45:02.999866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:49.041 [2024-10-28 13:45:03.000091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:49.041 [2024-10-28 13:45:03.000359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:49.041 [2024-10-28 13:45:03.000449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:49.041 [2024-10-28 13:45:03.000472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.041 
13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 [2024-10-28 13:45:03.083919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:49.041 [2024-10-28 13:45:03.084055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.041 [2024-10-28 13:45:03.084098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:49.041 [2024-10-28 13:45:03.084116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.041 [2024-10-28 13:45:03.087223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.041 [2024-10-28 13:45:03.087275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:49.041 [2024-10-28 13:45:03.087377] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:49.041 [2024-10-28 13:45:03.087443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:49.041 [2024-10-28 13:45:03.087552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:49.041 [2024-10-28 13:45:03.087573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:49.041 [2024-10-28 13:45:03.087874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:33:49.041 [2024-10-28 13:45:03.088052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:49.041 [2024-10-28 13:45:03.088069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:49.041 [2024-10-28 13:45:03.088403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.041 pt2 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:49.041 "name": "raid_bdev1", 00:33:49.041 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:49.041 "strip_size_kb": 0, 00:33:49.041 "state": 
"online", 00:33:49.041 "raid_level": "raid1", 00:33:49.041 "superblock": true, 00:33:49.041 "num_base_bdevs": 2, 00:33:49.041 "num_base_bdevs_discovered": 1, 00:33:49.041 "num_base_bdevs_operational": 1, 00:33:49.041 "base_bdevs_list": [ 00:33:49.041 { 00:33:49.041 "name": null, 00:33:49.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.041 "is_configured": false, 00:33:49.041 "data_offset": 256, 00:33:49.041 "data_size": 7936 00:33:49.041 }, 00:33:49.041 { 00:33:49.041 "name": "pt2", 00:33:49.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:49.041 "is_configured": true, 00:33:49.041 "data_offset": 256, 00:33:49.041 "data_size": 7936 00:33:49.041 } 00:33:49.041 ] 00:33:49.041 }' 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:49.041 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.608 [2024-10-28 13:45:03.600472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:49.608 [2024-10-28 13:45:03.600512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:49.608 [2024-10-28 13:45:03.600633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:49.608 [2024-10-28 13:45:03.600711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:49.608 [2024-10-28 13:45:03.600742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.608 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.608 [2024-10-28 13:45:03.660473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:49.608 [2024-10-28 13:45:03.660551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:49.608 [2024-10-28 13:45:03.660603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:49.608 [2024-10-28 13:45:03.660623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:49.608 [2024-10-28 13:45:03.663800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:49.608 [2024-10-28 13:45:03.664030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:49.608 
[2024-10-28 13:45:03.664216] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:49.608 [2024-10-28 13:45:03.664265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:49.608 [2024-10-28 13:45:03.664411] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:49.608 [2024-10-28 13:45:03.664437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:49.608 [2024-10-28 13:45:03.664466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:33:49.609 [2024-10-28 13:45:03.664516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:49.609 [2024-10-28 13:45:03.664743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:33:49.609 [2024-10-28 13:45:03.664760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:49.609 pt1 00:33:49.609 [2024-10-28 13:45:03.665081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:49.609 [2024-10-28 13:45:03.665303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:33:49.609 [2024-10-28 13:45:03.665333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:33:49.609 [2024-10-28 13:45:03.665482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:49.609 "name": "raid_bdev1", 00:33:49.609 "uuid": "0c75df8d-af83-4f34-90c8-b6af5bdacc50", 00:33:49.609 "strip_size_kb": 0, 00:33:49.609 "state": "online", 00:33:49.609 "raid_level": "raid1", 00:33:49.609 "superblock": true, 00:33:49.609 "num_base_bdevs": 2, 00:33:49.609 "num_base_bdevs_discovered": 1, 00:33:49.609 "num_base_bdevs_operational": 1, 00:33:49.609 "base_bdevs_list": [ 
00:33:49.609 { 00:33:49.609 "name": null, 00:33:49.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.609 "is_configured": false, 00:33:49.609 "data_offset": 256, 00:33:49.609 "data_size": 7936 00:33:49.609 }, 00:33:49.609 { 00:33:49.609 "name": "pt2", 00:33:49.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:49.609 "is_configured": true, 00:33:49.609 "data_offset": 256, 00:33:49.609 "data_size": 7936 00:33:49.609 } 00:33:49.609 ] 00:33:49.609 }' 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:49.609 13:45:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:50.175 [2024-10-28 13:45:04.257061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 0c75df8d-af83-4f34-90c8-b6af5bdacc50 '!=' 0c75df8d-af83-4f34-90c8-b6af5bdacc50 ']' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 98869 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 98869 ']' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 98869 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98869 00:33:50.175 killing process with pid 98869 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98869' 00:33:50.175 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 98869 00:33:50.175 [2024-10-28 13:45:04.331837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:50.176 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 98869 00:33:50.176 [2024-10-28 13:45:04.331949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:50.176 [2024-10-28 13:45:04.332016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:50.176 [2024-10-28 13:45:04.332035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state 
offline 00:33:50.433 [2024-10-28 13:45:04.356086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:50.692 13:45:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:33:50.692 00:33:50.692 real 0m5.800s 00:33:50.692 user 0m9.832s 00:33:50.692 sys 0m0.947s 00:33:50.692 ************************************ 00:33:50.692 END TEST raid_superblock_test_4k 00:33:50.692 ************************************ 00:33:50.692 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.692 13:45:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.692 13:45:04 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:33:50.692 13:45:04 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:33:50.692 13:45:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:50.692 13:45:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:50.692 13:45:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:50.692 ************************************ 00:33:50.692 START TEST raid_rebuild_test_sb_4k 00:33:50.692 ************************************ 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:50.692 13:45:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:50.692 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:50.693 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=99192 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 99192 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 99192 ']' 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:50.693 13:45:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:50.693 [2024-10-28 13:45:04.773277] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:33:50.693 [2024-10-28 13:45:04.773765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99192 ] 00:33:50.693 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:50.693 Zero copy mechanism will not be used. 00:33:50.951 [2024-10-28 13:45:04.928843] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:33:50.951 [2024-10-28 13:45:04.960735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.951 [2024-10-28 13:45:05.016422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.951 [2024-10-28 13:45:05.080054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:50.951 [2024-10-28 13:45:05.080349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 BaseBdev1_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 [2024-10-28 13:45:05.783733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:51.885 [2024-10-28 13:45:05.783861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.885 [2024-10-28 13:45:05.783906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:33:51.885 [2024-10-28 13:45:05.783930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.885 [2024-10-28 13:45:05.787046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.885 [2024-10-28 13:45:05.787113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:51.885 BaseBdev1 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 BaseBdev2_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 [2024-10-28 13:45:05.816546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:51.885 [2024-10-28 13:45:05.816626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.885 [2024-10-28 13:45:05.816657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:51.885 [2024-10-28 13:45:05.816675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.885 [2024-10-28 
13:45:05.819688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.885 [2024-10-28 13:45:05.819884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:51.885 BaseBdev2 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 spare_malloc 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 spare_delay 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 [2024-10-28 13:45:05.853275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:51.885 [2024-10-28 13:45:05.853546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.885 [2024-10-28 13:45:05.853587] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:51.885 [2024-10-28 13:45:05.853609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.885 [2024-10-28 13:45:05.856741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.885 [2024-10-28 13:45:05.856978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:51.885 spare 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 [2024-10-28 13:45:05.865464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.885 [2024-10-28 13:45:05.868252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:51.885 [2024-10-28 13:45:05.868506] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:33:51.885 [2024-10-28 13:45:05.868551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:51.885 [2024-10-28 13:45:05.868927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:51.885 [2024-10-28 13:45:05.869174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:33:51.885 [2024-10-28 13:45:05.869190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:33:51.885 [2024-10-28 13:45:05.869471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:51.885 13:45:05 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.885 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.885 "name": "raid_bdev1", 00:33:51.885 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:51.885 
"strip_size_kb": 0, 00:33:51.885 "state": "online", 00:33:51.885 "raid_level": "raid1", 00:33:51.885 "superblock": true, 00:33:51.885 "num_base_bdevs": 2, 00:33:51.885 "num_base_bdevs_discovered": 2, 00:33:51.885 "num_base_bdevs_operational": 2, 00:33:51.885 "base_bdevs_list": [ 00:33:51.885 { 00:33:51.885 "name": "BaseBdev1", 00:33:51.885 "uuid": "ca45eceb-b1b3-5f82-a725-eb0a54c6d083", 00:33:51.885 "is_configured": true, 00:33:51.885 "data_offset": 256, 00:33:51.885 "data_size": 7936 00:33:51.885 }, 00:33:51.885 { 00:33:51.885 "name": "BaseBdev2", 00:33:51.885 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:51.885 "is_configured": true, 00:33:51.886 "data_offset": 256, 00:33:51.886 "data_size": 7936 00:33:51.886 } 00:33:51.886 ] 00:33:51.886 }' 00:33:51.886 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.886 13:45:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:52.452 [2024-10-28 13:45:06.410064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:52.452 
13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.452 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:52.709 [2024-10-28 13:45:06.809846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:33:52.709 /dev/nbd0 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.709 1+0 records in 00:33:52.709 1+0 records out 00:33:52.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598893 s, 6.8 MB/s 00:33:52.709 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:52.968 13:45:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:52.968 13:45:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:33:53.903 7936+0 records in 00:33:53.903 7936+0 records out 00:33:53.903 32505856 bytes (33 MB, 31 MiB) copied, 0.833789 s, 39.0 MB/s 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.903 13:45:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:53.903 [2024-10-28 13:45:08.020719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.903 [2024-10-28 13:45:08.032823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:53.903 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.161 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.161 "name": "raid_bdev1", 00:33:54.161 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:54.161 "strip_size_kb": 0, 00:33:54.161 "state": "online", 00:33:54.161 "raid_level": "raid1", 00:33:54.161 "superblock": true, 00:33:54.161 "num_base_bdevs": 2, 00:33:54.161 "num_base_bdevs_discovered": 1, 00:33:54.161 "num_base_bdevs_operational": 1, 00:33:54.161 "base_bdevs_list": [ 00:33:54.161 { 00:33:54.161 "name": null, 00:33:54.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.161 "is_configured": false, 00:33:54.161 "data_offset": 0, 00:33:54.161 "data_size": 7936 00:33:54.161 }, 00:33:54.161 { 00:33:54.161 "name": "BaseBdev2", 00:33:54.161 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:54.161 "is_configured": true, 00:33:54.161 "data_offset": 256, 00:33:54.161 "data_size": 7936 00:33:54.161 } 00:33:54.161 ] 00:33:54.161 }' 00:33:54.162 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.162 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:54.419 13:45:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:54.419 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.419 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:54.419 [2024-10-28 13:45:08.541002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:54.419 [2024-10-28 13:45:08.567214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:33:54.419 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.419 13:45:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:54.419 [2024-10-28 13:45:08.571186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.793 13:45:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:55.793 "name": "raid_bdev1", 00:33:55.793 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:55.793 "strip_size_kb": 0, 00:33:55.793 "state": "online", 00:33:55.793 "raid_level": "raid1", 00:33:55.793 "superblock": true, 00:33:55.793 "num_base_bdevs": 2, 00:33:55.793 "num_base_bdevs_discovered": 2, 00:33:55.793 "num_base_bdevs_operational": 2, 00:33:55.793 "process": { 00:33:55.793 "type": "rebuild", 00:33:55.793 "target": "spare", 00:33:55.793 "progress": { 00:33:55.793 "blocks": 2560, 00:33:55.793 "percent": 32 00:33:55.793 } 00:33:55.793 }, 00:33:55.793 "base_bdevs_list": [ 00:33:55.793 { 00:33:55.793 "name": "spare", 00:33:55.793 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:33:55.793 "is_configured": true, 00:33:55.793 "data_offset": 256, 00:33:55.793 "data_size": 7936 00:33:55.793 }, 00:33:55.793 { 00:33:55.793 "name": "BaseBdev2", 00:33:55.793 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:55.793 "is_configured": true, 00:33:55.793 "data_offset": 256, 00:33:55.793 "data_size": 7936 00:33:55.793 } 00:33:55.793 ] 00:33:55.793 }' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.793 13:45:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.793 [2024-10-28 13:45:09.737512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.793 [2024-10-28 13:45:09.781519] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:55.793 [2024-10-28 13:45:09.781677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.793 [2024-10-28 13:45:09.781701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.793 [2024-10-28 13:45:09.781716] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.793 13:45:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.793 "name": "raid_bdev1", 00:33:55.793 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:55.793 "strip_size_kb": 0, 00:33:55.793 "state": "online", 00:33:55.793 "raid_level": "raid1", 00:33:55.793 "superblock": true, 00:33:55.793 "num_base_bdevs": 2, 00:33:55.793 "num_base_bdevs_discovered": 1, 00:33:55.793 "num_base_bdevs_operational": 1, 00:33:55.793 "base_bdevs_list": [ 00:33:55.793 { 00:33:55.793 "name": null, 00:33:55.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.793 "is_configured": false, 00:33:55.793 "data_offset": 0, 00:33:55.793 "data_size": 7936 00:33:55.793 }, 00:33:55.793 { 00:33:55.793 "name": "BaseBdev2", 00:33:55.793 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:55.793 "is_configured": true, 00:33:55.793 "data_offset": 256, 00:33:55.793 "data_size": 7936 00:33:55.793 } 00:33:55.793 ] 00:33:55.793 }' 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.793 13:45:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:56.360 13:45:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.360 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:56.360 "name": "raid_bdev1", 00:33:56.360 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:56.360 "strip_size_kb": 0, 00:33:56.360 "state": "online", 00:33:56.360 "raid_level": "raid1", 00:33:56.360 "superblock": true, 00:33:56.360 "num_base_bdevs": 2, 00:33:56.360 "num_base_bdevs_discovered": 1, 00:33:56.360 "num_base_bdevs_operational": 1, 00:33:56.360 "base_bdevs_list": [ 00:33:56.360 { 00:33:56.360 "name": null, 00:33:56.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.360 "is_configured": false, 00:33:56.360 "data_offset": 0, 00:33:56.360 "data_size": 7936 00:33:56.360 }, 00:33:56.360 { 00:33:56.360 "name": "BaseBdev2", 00:33:56.360 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:56.360 "is_configured": true, 00:33:56.360 "data_offset": 256, 00:33:56.360 "data_size": 7936 00:33:56.360 } 00:33:56.361 ] 00:33:56.361 }' 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:56.361 13:45:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:56.361 [2024-10-28 13:45:10.497399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:56.361 [2024-10-28 13:45:10.504833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.361 13:45:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:56.361 [2024-10-28 13:45:10.507524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:57.737 "name": "raid_bdev1", 00:33:57.737 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:57.737 "strip_size_kb": 0, 00:33:57.737 "state": "online", 00:33:57.737 "raid_level": "raid1", 00:33:57.737 "superblock": true, 00:33:57.737 "num_base_bdevs": 2, 00:33:57.737 "num_base_bdevs_discovered": 2, 00:33:57.737 "num_base_bdevs_operational": 2, 00:33:57.737 "process": { 00:33:57.737 "type": "rebuild", 00:33:57.737 "target": "spare", 00:33:57.737 "progress": { 00:33:57.737 "blocks": 2560, 00:33:57.737 "percent": 32 00:33:57.737 } 00:33:57.737 }, 00:33:57.737 "base_bdevs_list": [ 00:33:57.737 { 00:33:57.737 "name": "spare", 00:33:57.737 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:33:57.737 "is_configured": true, 00:33:57.737 "data_offset": 256, 00:33:57.737 "data_size": 7936 00:33:57.737 }, 00:33:57.737 { 00:33:57.737 "name": "BaseBdev2", 00:33:57.737 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:57.737 "is_configured": true, 00:33:57.737 "data_offset": 256, 00:33:57.737 "data_size": 7936 00:33:57.737 } 00:33:57.737 ] 00:33:57.737 }' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:57.737 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=645 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.737 13:45:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:57.737 "name": "raid_bdev1", 00:33:57.737 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:57.737 "strip_size_kb": 0, 00:33:57.737 "state": "online", 00:33:57.737 "raid_level": "raid1", 00:33:57.737 "superblock": true, 00:33:57.737 "num_base_bdevs": 2, 00:33:57.737 "num_base_bdevs_discovered": 2, 00:33:57.737 "num_base_bdevs_operational": 2, 00:33:57.737 "process": { 00:33:57.737 "type": "rebuild", 00:33:57.737 "target": "spare", 00:33:57.737 "progress": { 00:33:57.737 "blocks": 2816, 00:33:57.737 "percent": 35 00:33:57.737 } 00:33:57.737 }, 00:33:57.737 "base_bdevs_list": [ 00:33:57.737 { 00:33:57.737 "name": "spare", 00:33:57.737 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:33:57.737 "is_configured": true, 00:33:57.737 "data_offset": 256, 00:33:57.737 "data_size": 7936 00:33:57.737 }, 00:33:57.737 { 00:33:57.737 "name": "BaseBdev2", 00:33:57.737 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:57.737 "is_configured": true, 00:33:57.737 "data_offset": 256, 00:33:57.737 "data_size": 7936 00:33:57.737 } 00:33:57.737 ] 00:33:57.737 }' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:57.737 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.738 13:45:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.112 "name": "raid_bdev1", 00:33:59.112 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:59.112 "strip_size_kb": 0, 00:33:59.112 "state": "online", 00:33:59.112 "raid_level": "raid1", 00:33:59.112 "superblock": true, 00:33:59.112 "num_base_bdevs": 2, 00:33:59.112 "num_base_bdevs_discovered": 2, 00:33:59.112 "num_base_bdevs_operational": 2, 00:33:59.112 "process": { 00:33:59.112 "type": "rebuild", 00:33:59.112 "target": "spare", 00:33:59.112 "progress": { 00:33:59.112 "blocks": 5888, 00:33:59.112 "percent": 74 00:33:59.112 } 00:33:59.112 }, 00:33:59.112 "base_bdevs_list": [ 00:33:59.112 { 00:33:59.112 "name": "spare", 00:33:59.112 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:33:59.112 "is_configured": true, 00:33:59.112 "data_offset": 256, 00:33:59.112 "data_size": 7936 00:33:59.112 
}, 00:33:59.112 { 00:33:59.112 "name": "BaseBdev2", 00:33:59.112 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:59.112 "is_configured": true, 00:33:59.112 "data_offset": 256, 00:33:59.112 "data_size": 7936 00:33:59.112 } 00:33:59.112 ] 00:33:59.112 }' 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:59.112 13:45:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:59.112 13:45:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:59.112 13:45:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:59.679 [2024-10-28 13:45:13.630711] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:59.679 [2024-10-28 13:45:13.630834] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:59.679 [2024-10-28 13:45:13.630982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.937 "name": "raid_bdev1", 00:33:59.937 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:33:59.937 "strip_size_kb": 0, 00:33:59.937 "state": "online", 00:33:59.937 "raid_level": "raid1", 00:33:59.937 "superblock": true, 00:33:59.937 "num_base_bdevs": 2, 00:33:59.937 "num_base_bdevs_discovered": 2, 00:33:59.937 "num_base_bdevs_operational": 2, 00:33:59.937 "base_bdevs_list": [ 00:33:59.937 { 00:33:59.937 "name": "spare", 00:33:59.937 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:33:59.937 "is_configured": true, 00:33:59.937 "data_offset": 256, 00:33:59.937 "data_size": 7936 00:33:59.937 }, 00:33:59.937 { 00:33:59.937 "name": "BaseBdev2", 00:33:59.937 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:33:59.937 "is_configured": true, 00:33:59.937 "data_offset": 256, 00:33:59.937 "data_size": 7936 00:33:59.937 } 00:33:59.937 ] 00:33:59.937 }' 00:33:59.937 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:00.196 "name": "raid_bdev1", 00:34:00.196 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:00.196 "strip_size_kb": 0, 00:34:00.196 "state": "online", 00:34:00.196 "raid_level": "raid1", 00:34:00.196 "superblock": true, 00:34:00.196 "num_base_bdevs": 2, 00:34:00.196 "num_base_bdevs_discovered": 2, 00:34:00.196 "num_base_bdevs_operational": 2, 00:34:00.196 "base_bdevs_list": [ 00:34:00.196 { 00:34:00.196 "name": "spare", 00:34:00.196 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:00.196 "is_configured": true, 00:34:00.196 "data_offset": 256, 00:34:00.196 "data_size": 7936 00:34:00.196 }, 00:34:00.196 { 00:34:00.196 "name": "BaseBdev2", 00:34:00.196 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:00.196 "is_configured": true, 
00:34:00.196 "data_offset": 256, 00:34:00.196 "data_size": 7936 00:34:00.196 } 00:34:00.196 ] 00:34:00.196 }' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.196 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.455 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:00.455 "name": "raid_bdev1", 00:34:00.455 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:00.455 "strip_size_kb": 0, 00:34:00.455 "state": "online", 00:34:00.455 "raid_level": "raid1", 00:34:00.455 "superblock": true, 00:34:00.455 "num_base_bdevs": 2, 00:34:00.455 "num_base_bdevs_discovered": 2, 00:34:00.455 "num_base_bdevs_operational": 2, 00:34:00.455 "base_bdevs_list": [ 00:34:00.455 { 00:34:00.455 "name": "spare", 00:34:00.455 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:00.455 "is_configured": true, 00:34:00.455 "data_offset": 256, 00:34:00.455 "data_size": 7936 00:34:00.455 }, 00:34:00.455 { 00:34:00.455 "name": "BaseBdev2", 00:34:00.455 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:00.455 "is_configured": true, 00:34:00.455 "data_offset": 256, 00:34:00.455 "data_size": 7936 00:34:00.455 } 00:34:00.455 ] 00:34:00.455 }' 00:34:00.455 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:00.455 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:00.714 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:00.714 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.714 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:00.973 [2024-10-28 13:45:14.877599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:00.973 [2024-10-28 13:45:14.877642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:34:00.973 [2024-10-28 13:45:14.877759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:00.973 [2024-10-28 13:45:14.877873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:00.973 [2024-10-28 13:45:14.877889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:00.973 13:45:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:01.231 /dev/nbd0 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:01.231 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:01.232 1+0 records in 00:34:01.232 1+0 records out 00:34:01.232 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000333481 s, 12.3 MB/s 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:01.232 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:01.489 /dev/nbd1 00:34:01.489 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:01.748 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:01.748 1+0 records in 00:34:01.748 1+0 records out 00:34:01.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368717 s, 11.1 MB/s 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:01.749 13:45:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:02.008 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.267 [2024-10-28 13:45:16.300060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:02.267 [2024-10-28 13:45:16.300176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:02.267 [2024-10-28 13:45:16.300216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:02.267 [2024-10-28 13:45:16.300232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:02.267 [2024-10-28 13:45:16.303189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:02.267 [2024-10-28 13:45:16.303257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:02.267 [2024-10-28 13:45:16.303364] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:02.267 [2024-10-28 
13:45:16.303436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:02.267 [2024-10-28 13:45:16.303598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:02.267 spare 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.267 [2024-10-28 13:45:16.403734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:02.267 [2024-10-28 13:45:16.403781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:02.267 [2024-10-28 13:45:16.404281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:34:02.267 [2024-10-28 13:45:16.404532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:02.267 [2024-10-28 13:45:16.404566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:02.267 [2024-10-28 13:45:16.404794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.267 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.525 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.525 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.525 "name": "raid_bdev1", 00:34:02.525 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:02.525 "strip_size_kb": 0, 00:34:02.525 "state": "online", 00:34:02.525 "raid_level": "raid1", 00:34:02.525 "superblock": true, 00:34:02.525 "num_base_bdevs": 2, 00:34:02.525 "num_base_bdevs_discovered": 2, 00:34:02.525 "num_base_bdevs_operational": 2, 00:34:02.525 "base_bdevs_list": [ 00:34:02.525 { 00:34:02.525 "name": "spare", 00:34:02.525 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:02.525 "is_configured": true, 00:34:02.525 "data_offset": 256, 00:34:02.525 "data_size": 7936 00:34:02.525 }, 00:34:02.525 { 
00:34:02.525 "name": "BaseBdev2", 00:34:02.525 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:02.525 "is_configured": true, 00:34:02.525 "data_offset": 256, 00:34:02.525 "data_size": 7936 00:34:02.525 } 00:34:02.525 ] 00:34:02.525 }' 00:34:02.525 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.525 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.782 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.040 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.040 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:03.040 "name": "raid_bdev1", 00:34:03.040 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:03.040 "strip_size_kb": 0, 00:34:03.040 "state": "online", 00:34:03.040 "raid_level": "raid1", 00:34:03.040 "superblock": true, 00:34:03.040 "num_base_bdevs": 2, 00:34:03.040 "num_base_bdevs_discovered": 2, 
00:34:03.040 "num_base_bdevs_operational": 2, 00:34:03.040 "base_bdevs_list": [ 00:34:03.040 { 00:34:03.040 "name": "spare", 00:34:03.040 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:03.040 "is_configured": true, 00:34:03.040 "data_offset": 256, 00:34:03.040 "data_size": 7936 00:34:03.040 }, 00:34:03.040 { 00:34:03.040 "name": "BaseBdev2", 00:34:03.040 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:03.040 "is_configured": true, 00:34:03.040 "data_offset": 256, 00:34:03.040 "data_size": 7936 00:34:03.040 } 00:34:03.040 ] 00:34:03.040 }' 00:34:03.040 13:45:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:34:03.040 [2024-10-28 13:45:17.141123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:03.040 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.041 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.299 13:45:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.299 "name": "raid_bdev1", 00:34:03.299 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:03.299 "strip_size_kb": 0, 00:34:03.299 "state": "online", 00:34:03.299 "raid_level": "raid1", 00:34:03.299 "superblock": true, 00:34:03.299 "num_base_bdevs": 2, 00:34:03.299 "num_base_bdevs_discovered": 1, 00:34:03.299 "num_base_bdevs_operational": 1, 00:34:03.299 "base_bdevs_list": [ 00:34:03.299 { 00:34:03.299 "name": null, 00:34:03.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.299 "is_configured": false, 00:34:03.299 "data_offset": 0, 00:34:03.299 "data_size": 7936 00:34:03.299 }, 00:34:03.299 { 00:34:03.299 "name": "BaseBdev2", 00:34:03.299 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:03.299 "is_configured": true, 00:34:03.299 "data_offset": 256, 00:34:03.299 "data_size": 7936 00:34:03.299 } 00:34:03.299 ] 00:34:03.299 }' 00:34:03.299 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.299 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.558 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:03.558 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.558 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:03.558 [2024-10-28 13:45:17.685473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:03.558 [2024-10-28 13:45:17.685748] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:03.558 [2024-10-28 13:45:17.685795] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:03.558 [2024-10-28 13:45:17.685887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:03.558 [2024-10-28 13:45:17.693432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:34:03.558 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.558 13:45:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:03.558 [2024-10-28 13:45:17.696213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:04.952 "name": "raid_bdev1", 00:34:04.952 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:04.952 "strip_size_kb": 0, 00:34:04.952 "state": "online", 
00:34:04.952 "raid_level": "raid1", 00:34:04.952 "superblock": true, 00:34:04.952 "num_base_bdevs": 2, 00:34:04.952 "num_base_bdevs_discovered": 2, 00:34:04.952 "num_base_bdevs_operational": 2, 00:34:04.952 "process": { 00:34:04.952 "type": "rebuild", 00:34:04.952 "target": "spare", 00:34:04.952 "progress": { 00:34:04.952 "blocks": 2560, 00:34:04.952 "percent": 32 00:34:04.952 } 00:34:04.952 }, 00:34:04.952 "base_bdevs_list": [ 00:34:04.952 { 00:34:04.952 "name": "spare", 00:34:04.952 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:04.952 "is_configured": true, 00:34:04.952 "data_offset": 256, 00:34:04.952 "data_size": 7936 00:34:04.952 }, 00:34:04.952 { 00:34:04.952 "name": "BaseBdev2", 00:34:04.952 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:04.952 "is_configured": true, 00:34:04.952 "data_offset": 256, 00:34:04.952 "data_size": 7936 00:34:04.952 } 00:34:04.952 ] 00:34:04.952 }' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:04.952 [2024-10-28 13:45:18.867464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:04.952 [2024-10-28 13:45:18.905265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:04.952 [2024-10-28 
13:45:18.905343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:04.952 [2024-10-28 13:45:18.905368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:04.952 [2024-10-28 13:45:18.905382] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:04.952 "name": "raid_bdev1", 00:34:04.952 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:04.952 "strip_size_kb": 0, 00:34:04.952 "state": "online", 00:34:04.952 "raid_level": "raid1", 00:34:04.952 "superblock": true, 00:34:04.952 "num_base_bdevs": 2, 00:34:04.952 "num_base_bdevs_discovered": 1, 00:34:04.952 "num_base_bdevs_operational": 1, 00:34:04.952 "base_bdevs_list": [ 00:34:04.952 { 00:34:04.952 "name": null, 00:34:04.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.952 "is_configured": false, 00:34:04.952 "data_offset": 0, 00:34:04.952 "data_size": 7936 00:34:04.952 }, 00:34:04.952 { 00:34:04.952 "name": "BaseBdev2", 00:34:04.952 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:04.952 "is_configured": true, 00:34:04.952 "data_offset": 256, 00:34:04.952 "data_size": 7936 00:34:04.952 } 00:34:04.952 ] 00:34:04.952 }' 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:04.952 13:45:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.520 13:45:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:05.520 13:45:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.520 13:45:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:05.520 [2024-10-28 13:45:19.451882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:05.520 [2024-10-28 13:45:19.452007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:05.520 [2024-10-28 13:45:19.452041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:34:05.520 [2024-10-28 13:45:19.452059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:05.520 [2024-10-28 13:45:19.452671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:05.520 [2024-10-28 13:45:19.452714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:05.520 [2024-10-28 13:45:19.452829] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:05.520 [2024-10-28 13:45:19.452858] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:05.520 [2024-10-28 13:45:19.452887] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:05.520 [2024-10-28 13:45:19.452944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:05.520 [2024-10-28 13:45:19.460328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:34:05.520 spare 00:34:05.520 13:45:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.520 13:45:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:05.520 [2024-10-28 13:45:19.463141] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:06.454 "name": "raid_bdev1", 00:34:06.454 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:06.454 "strip_size_kb": 0, 00:34:06.454 "state": "online", 00:34:06.454 "raid_level": "raid1", 00:34:06.454 "superblock": true, 00:34:06.454 "num_base_bdevs": 2, 00:34:06.454 "num_base_bdevs_discovered": 2, 00:34:06.454 "num_base_bdevs_operational": 2, 00:34:06.454 "process": { 00:34:06.454 "type": "rebuild", 00:34:06.454 "target": "spare", 00:34:06.454 "progress": { 00:34:06.454 "blocks": 2560, 00:34:06.454 "percent": 32 00:34:06.454 } 00:34:06.454 }, 00:34:06.454 "base_bdevs_list": [ 00:34:06.454 { 00:34:06.454 "name": "spare", 00:34:06.454 "uuid": "cd8348b9-9b34-57a2-acee-7bf3e2099f4f", 00:34:06.454 "is_configured": true, 00:34:06.454 "data_offset": 256, 00:34:06.454 "data_size": 7936 00:34:06.454 }, 00:34:06.454 { 00:34:06.454 "name": "BaseBdev2", 00:34:06.454 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:06.454 "is_configured": true, 00:34:06.454 "data_offset": 256, 00:34:06.454 "data_size": 7936 00:34:06.454 } 00:34:06.454 ] 00:34:06.454 }' 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:34:06.454 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:06.714 [2024-10-28 13:45:20.632782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:06.714 [2024-10-28 13:45:20.671893] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:06.714 [2024-10-28 13:45:20.671996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:06.714 [2024-10-28 13:45:20.672022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:06.714 [2024-10-28 13:45:20.672033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.714 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:06.714 "name": "raid_bdev1", 00:34:06.714 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:06.714 "strip_size_kb": 0, 00:34:06.714 "state": "online", 00:34:06.714 "raid_level": "raid1", 00:34:06.714 "superblock": true, 00:34:06.714 "num_base_bdevs": 2, 00:34:06.714 "num_base_bdevs_discovered": 1, 00:34:06.714 "num_base_bdevs_operational": 1, 00:34:06.714 "base_bdevs_list": [ 00:34:06.714 { 00:34:06.714 "name": null, 00:34:06.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.715 "is_configured": false, 00:34:06.715 "data_offset": 0, 00:34:06.715 "data_size": 7936 00:34:06.715 }, 00:34:06.715 { 00:34:06.715 "name": "BaseBdev2", 00:34:06.715 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:06.715 "is_configured": true, 00:34:06.715 "data_offset": 256, 00:34:06.715 "data_size": 7936 00:34:06.715 } 00:34:06.715 ] 00:34:06.715 }' 
00:34:06.715 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:06.715 13:45:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:07.281 "name": "raid_bdev1", 00:34:07.281 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:07.281 "strip_size_kb": 0, 00:34:07.281 "state": "online", 00:34:07.281 "raid_level": "raid1", 00:34:07.281 "superblock": true, 00:34:07.281 "num_base_bdevs": 2, 00:34:07.281 "num_base_bdevs_discovered": 1, 00:34:07.281 "num_base_bdevs_operational": 1, 00:34:07.281 "base_bdevs_list": [ 00:34:07.281 { 00:34:07.281 "name": null, 00:34:07.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.281 "is_configured": false, 00:34:07.281 "data_offset": 0, 
00:34:07.281 "data_size": 7936 00:34:07.281 }, 00:34:07.281 { 00:34:07.281 "name": "BaseBdev2", 00:34:07.281 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:07.281 "is_configured": true, 00:34:07.281 "data_offset": 256, 00:34:07.281 "data_size": 7936 00:34:07.281 } 00:34:07.281 ] 00:34:07.281 }' 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:07.281 [2024-10-28 13:45:21.403230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:07.281 [2024-10-28 13:45:21.403311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:07.281 [2024-10-28 13:45:21.403346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:07.281 [2024-10-28 13:45:21.403361] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:07.281 [2024-10-28 13:45:21.403890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:07.281 [2024-10-28 13:45:21.403932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:07.281 [2024-10-28 13:45:21.404040] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:07.281 [2024-10-28 13:45:21.404061] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:07.281 [2024-10-28 13:45:21.404074] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:07.281 [2024-10-28 13:45:21.404103] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:07.281 BaseBdev1 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.281 13:45:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:08.656 "name": "raid_bdev1", 00:34:08.656 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:08.656 "strip_size_kb": 0, 00:34:08.656 "state": "online", 00:34:08.656 "raid_level": "raid1", 00:34:08.656 "superblock": true, 00:34:08.656 "num_base_bdevs": 2, 00:34:08.656 "num_base_bdevs_discovered": 1, 00:34:08.656 "num_base_bdevs_operational": 1, 00:34:08.656 "base_bdevs_list": [ 00:34:08.656 { 00:34:08.656 "name": null, 00:34:08.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.656 "is_configured": false, 00:34:08.656 "data_offset": 0, 00:34:08.656 "data_size": 7936 00:34:08.656 }, 00:34:08.656 { 00:34:08.656 "name": "BaseBdev2", 00:34:08.656 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:08.656 "is_configured": true, 00:34:08.656 "data_offset": 256, 00:34:08.656 "data_size": 7936 00:34:08.656 } 00:34:08.656 ] 00:34:08.656 }' 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:08.656 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:08.914 "name": "raid_bdev1", 00:34:08.914 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:08.914 "strip_size_kb": 0, 00:34:08.914 "state": "online", 00:34:08.914 "raid_level": "raid1", 00:34:08.914 "superblock": true, 00:34:08.914 "num_base_bdevs": 2, 00:34:08.914 "num_base_bdevs_discovered": 1, 00:34:08.914 "num_base_bdevs_operational": 1, 00:34:08.914 "base_bdevs_list": [ 00:34:08.914 { 00:34:08.914 "name": null, 00:34:08.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.914 "is_configured": false, 00:34:08.914 "data_offset": 0, 00:34:08.914 "data_size": 7936 00:34:08.914 }, 00:34:08.914 { 00:34:08.914 "name": "BaseBdev2", 00:34:08.914 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:08.914 "is_configured": true, 
00:34:08.914 "data_offset": 256, 00:34:08.914 "data_size": 7936 00:34:08.914 } 00:34:08.914 ] 00:34:08.914 }' 00:34:08.914 13:45:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:08.914 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:08.914 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.172 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:09.172 [2024-10-28 13:45:23.115887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:09.172 [2024-10-28 13:45:23.116135] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:09.172 [2024-10-28 13:45:23.116198] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:09.172 request: 00:34:09.172 { 00:34:09.172 "base_bdev": "BaseBdev1", 00:34:09.172 "raid_bdev": "raid_bdev1", 00:34:09.172 "method": "bdev_raid_add_base_bdev", 00:34:09.172 "req_id": 1 00:34:09.172 } 00:34:09.172 Got JSON-RPC error response 00:34:09.172 response: 00:34:09.173 { 00:34:09.173 "code": -22, 00:34:09.173 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:09.173 } 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:09.173 13:45:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:10.109 "name": "raid_bdev1", 00:34:10.109 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:10.109 "strip_size_kb": 0, 00:34:10.109 "state": "online", 00:34:10.109 "raid_level": "raid1", 00:34:10.109 "superblock": true, 00:34:10.109 "num_base_bdevs": 2, 00:34:10.109 "num_base_bdevs_discovered": 1, 00:34:10.109 "num_base_bdevs_operational": 1, 00:34:10.109 "base_bdevs_list": [ 00:34:10.109 { 00:34:10.109 "name": null, 00:34:10.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.109 "is_configured": false, 00:34:10.109 "data_offset": 0, 00:34:10.109 "data_size": 7936 00:34:10.109 }, 00:34:10.109 { 00:34:10.109 "name": "BaseBdev2", 00:34:10.109 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:10.109 "is_configured": true, 00:34:10.109 "data_offset": 256, 00:34:10.109 "data_size": 7936 00:34:10.109 } 00:34:10.109 ] 00:34:10.109 }' 
00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:10.109 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:10.677 "name": "raid_bdev1", 00:34:10.677 "uuid": "4114523f-d5aa-40e6-936d-8262443896f1", 00:34:10.677 "strip_size_kb": 0, 00:34:10.677 "state": "online", 00:34:10.677 "raid_level": "raid1", 00:34:10.677 "superblock": true, 00:34:10.677 "num_base_bdevs": 2, 00:34:10.677 "num_base_bdevs_discovered": 1, 00:34:10.677 "num_base_bdevs_operational": 1, 00:34:10.677 "base_bdevs_list": [ 00:34:10.677 { 00:34:10.677 "name": null, 00:34:10.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:10.677 "is_configured": false, 00:34:10.677 "data_offset": 0, 
00:34:10.677 "data_size": 7936 00:34:10.677 }, 00:34:10.677 { 00:34:10.677 "name": "BaseBdev2", 00:34:10.677 "uuid": "bcd721c0-480e-539e-bdca-0fb536362a89", 00:34:10.677 "is_configured": true, 00:34:10.677 "data_offset": 256, 00:34:10.677 "data_size": 7936 00:34:10.677 } 00:34:10.677 ] 00:34:10.677 }' 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 99192 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 99192 ']' 00:34:10.677 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 99192 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99192 00:34:10.936 killing process with pid 99192 00:34:10.936 Received shutdown signal, test time was about 60.000000 seconds 00:34:10.936 00:34:10.936 Latency(us) 00:34:10.936 [2024-10-28T13:45:25.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:10.936 [2024-10-28T13:45:25.096Z] =================================================================================================================== 00:34:10.936 [2024-10-28T13:45:25.096Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:10.936 13:45:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99192' 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 99192 00:34:10.936 [2024-10-28 13:45:24.869934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:10.936 13:45:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 99192 00:34:10.936 [2024-10-28 13:45:24.870113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:10.936 [2024-10-28 13:45:24.870210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:10.936 [2024-10-28 13:45:24.870229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:10.936 [2024-10-28 13:45:24.903181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.195 ************************************ 00:34:11.195 END TEST raid_rebuild_test_sb_4k 00:34:11.195 ************************************ 00:34:11.195 13:45:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:34:11.195 00:34:11.195 real 0m20.495s 00:34:11.195 user 0m28.438s 00:34:11.195 sys 0m2.472s 00:34:11.195 13:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:11.195 13:45:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:11.195 13:45:25 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:34:11.195 13:45:25 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:34:11.195 13:45:25 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:34:11.195 13:45:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:11.195 13:45:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:11.195 ************************************ 00:34:11.195 START TEST raid_state_function_test_sb_md_separate 00:34:11.195 ************************************ 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=99887 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99887' 00:34:11.195 Process raid pid: 99887 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 99887 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 99887 ']' 00:34:11.195 13:45:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.195 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:11.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.196 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.196 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:11.196 13:45:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:11.455 [2024-10-28 13:45:25.353130] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:34:11.455 [2024-10-28 13:45:25.354010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.455 [2024-10-28 13:45:25.516485] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:11.455 [2024-10-28 13:45:25.546659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.455 [2024-10-28 13:45:25.587117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.713 [2024-10-28 13:45:25.644519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.713 [2024-10-28 13:45:25.644565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.281 [2024-10-28 13:45:26.325343] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:12.281 [2024-10-28 13:45:26.325401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:12.281 [2024-10-28 13:45:26.325433] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.281 [2024-10-28 13:45:26.325448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.281 "name": "Existed_Raid", 00:34:12.281 "uuid": "825651ef-2694-48c7-8a01-7a295eec8abe", 00:34:12.281 "strip_size_kb": 0, 00:34:12.281 "state": 
"configuring", 00:34:12.281 "raid_level": "raid1", 00:34:12.281 "superblock": true, 00:34:12.281 "num_base_bdevs": 2, 00:34:12.281 "num_base_bdevs_discovered": 0, 00:34:12.281 "num_base_bdevs_operational": 2, 00:34:12.281 "base_bdevs_list": [ 00:34:12.281 { 00:34:12.281 "name": "BaseBdev1", 00:34:12.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.281 "is_configured": false, 00:34:12.281 "data_offset": 0, 00:34:12.281 "data_size": 0 00:34:12.281 }, 00:34:12.281 { 00:34:12.281 "name": "BaseBdev2", 00:34:12.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.281 "is_configured": false, 00:34:12.281 "data_offset": 0, 00:34:12.281 "data_size": 0 00:34:12.281 } 00:34:12.281 ] 00:34:12.281 }' 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.281 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.847 [2024-10-28 13:45:26.849455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:12.847 [2024-10-28 13:45:26.849499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.847 [2024-10-28 13:45:26.857476] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:12.847 [2024-10-28 13:45:26.857520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:12.847 [2024-10-28 13:45:26.857555] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:12.847 [2024-10-28 13:45:26.857569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.847 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.847 [2024-10-28 13:45:26.879304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:12.847 BaseBdev1 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 
00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.848 [ 00:34:12.848 { 00:34:12.848 "name": "BaseBdev1", 00:34:12.848 "aliases": [ 00:34:12.848 "1725c35d-12ad-46c6-8503-5f206f35cf05" 00:34:12.848 ], 00:34:12.848 "product_name": "Malloc disk", 00:34:12.848 "block_size": 4096, 00:34:12.848 "num_blocks": 8192, 00:34:12.848 "uuid": "1725c35d-12ad-46c6-8503-5f206f35cf05", 00:34:12.848 "md_size": 32, 00:34:12.848 "md_interleave": false, 00:34:12.848 "dif_type": 0, 00:34:12.848 "assigned_rate_limits": { 00:34:12.848 "rw_ios_per_sec": 0, 00:34:12.848 "rw_mbytes_per_sec": 0, 00:34:12.848 "r_mbytes_per_sec": 0, 00:34:12.848 "w_mbytes_per_sec": 0 00:34:12.848 }, 00:34:12.848 "claimed": true, 00:34:12.848 "claim_type": "exclusive_write", 00:34:12.848 "zoned": false, 00:34:12.848 "supported_io_types": { 00:34:12.848 "read": true, 00:34:12.848 "write": true, 00:34:12.848 "unmap": true, 
00:34:12.848 "flush": true, 00:34:12.848 "reset": true, 00:34:12.848 "nvme_admin": false, 00:34:12.848 "nvme_io": false, 00:34:12.848 "nvme_io_md": false, 00:34:12.848 "write_zeroes": true, 00:34:12.848 "zcopy": true, 00:34:12.848 "get_zone_info": false, 00:34:12.848 "zone_management": false, 00:34:12.848 "zone_append": false, 00:34:12.848 "compare": false, 00:34:12.848 "compare_and_write": false, 00:34:12.848 "abort": true, 00:34:12.848 "seek_hole": false, 00:34:12.848 "seek_data": false, 00:34:12.848 "copy": true, 00:34:12.848 "nvme_iov_md": false 00:34:12.848 }, 00:34:12.848 "memory_domains": [ 00:34:12.848 { 00:34:12.848 "dma_device_id": "system", 00:34:12.848 "dma_device_type": 1 00:34:12.848 }, 00:34:12.848 { 00:34:12.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:12.848 "dma_device_type": 2 00:34:12.848 } 00:34:12.848 ], 00:34:12.848 "driver_specific": {} 00:34:12.848 } 00:34:12.848 ] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:12.848 13:45:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.848 "name": "Existed_Raid", 00:34:12.848 "uuid": "28f5e248-8568-474c-8fd2-b6ffdcf487ae", 00:34:12.848 "strip_size_kb": 0, 00:34:12.848 "state": "configuring", 00:34:12.848 "raid_level": "raid1", 00:34:12.848 "superblock": true, 00:34:12.848 "num_base_bdevs": 2, 00:34:12.848 "num_base_bdevs_discovered": 1, 00:34:12.848 "num_base_bdevs_operational": 2, 00:34:12.848 "base_bdevs_list": [ 00:34:12.848 { 00:34:12.848 "name": "BaseBdev1", 00:34:12.848 "uuid": "1725c35d-12ad-46c6-8503-5f206f35cf05", 00:34:12.848 "is_configured": true, 00:34:12.848 "data_offset": 256, 00:34:12.848 "data_size": 7936 00:34:12.848 }, 00:34:12.848 { 00:34:12.848 "name": "BaseBdev2", 00:34:12.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:12.848 "is_configured": 
false, 00:34:12.848 "data_offset": 0, 00:34:12.848 "data_size": 0 00:34:12.848 } 00:34:12.848 ] 00:34:12.848 }' 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.848 13:45:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.416 [2024-10-28 13:45:27.435613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:13.416 [2024-10-28 13:45:27.435685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.416 [2024-10-28 13:45:27.443680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:13.416 [2024-10-28 13:45:27.446461] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:13.416 [2024-10-28 13:45:27.446683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:13.416 13:45:27 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.416 13:45:27 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.416 "name": "Existed_Raid", 00:34:13.416 "uuid": "0e415293-a6ff-4615-8168-6c6b481a947c", 00:34:13.416 "strip_size_kb": 0, 00:34:13.416 "state": "configuring", 00:34:13.416 "raid_level": "raid1", 00:34:13.416 "superblock": true, 00:34:13.416 "num_base_bdevs": 2, 00:34:13.416 "num_base_bdevs_discovered": 1, 00:34:13.416 "num_base_bdevs_operational": 2, 00:34:13.416 "base_bdevs_list": [ 00:34:13.416 { 00:34:13.416 "name": "BaseBdev1", 00:34:13.416 "uuid": "1725c35d-12ad-46c6-8503-5f206f35cf05", 00:34:13.416 "is_configured": true, 00:34:13.416 "data_offset": 256, 00:34:13.416 "data_size": 7936 00:34:13.416 }, 00:34:13.416 { 00:34:13.416 "name": "BaseBdev2", 00:34:13.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.416 "is_configured": false, 00:34:13.416 "data_offset": 0, 00:34:13.416 "data_size": 0 00:34:13.416 } 00:34:13.416 ] 00:34:13.416 }' 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.416 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:34:13.984 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.984 13:45:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 [2024-10-28 
13:45:28.007397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:13.984 [2024-10-28 13:45:28.007864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:13.984 [2024-10-28 13:45:28.007894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:13.984 [2024-10-28 13:45:28.008010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:13.984 BaseBdev2 00:34:13.984 [2024-10-28 13:45:28.008231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:13.984 [2024-10-28 13:45:28.008248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:34:13.984 [2024-10-28 13:45:28.008380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 [ 00:34:13.984 { 00:34:13.984 "name": "BaseBdev2", 00:34:13.984 "aliases": [ 00:34:13.984 "457fc9e9-b248-4e06-92e7-519668099bf7" 00:34:13.984 ], 00:34:13.984 "product_name": "Malloc disk", 00:34:13.984 "block_size": 4096, 00:34:13.984 "num_blocks": 8192, 00:34:13.984 "uuid": "457fc9e9-b248-4e06-92e7-519668099bf7", 00:34:13.984 "md_size": 32, 00:34:13.984 "md_interleave": false, 00:34:13.984 "dif_type": 0, 00:34:13.984 "assigned_rate_limits": { 00:34:13.984 "rw_ios_per_sec": 0, 00:34:13.984 "rw_mbytes_per_sec": 0, 00:34:13.984 "r_mbytes_per_sec": 0, 00:34:13.984 "w_mbytes_per_sec": 0 00:34:13.984 }, 00:34:13.984 "claimed": true, 00:34:13.984 "claim_type": "exclusive_write", 00:34:13.984 "zoned": false, 00:34:13.984 "supported_io_types": { 00:34:13.984 "read": true, 00:34:13.984 "write": true, 00:34:13.984 "unmap": true, 00:34:13.984 "flush": true, 00:34:13.984 "reset": true, 00:34:13.984 "nvme_admin": false, 00:34:13.984 "nvme_io": false, 00:34:13.984 "nvme_io_md": false, 00:34:13.984 "write_zeroes": true, 00:34:13.984 "zcopy": true, 00:34:13.984 "get_zone_info": false, 00:34:13.984 "zone_management": false, 00:34:13.984 "zone_append": false, 00:34:13.984 "compare": false, 00:34:13.984 "compare_and_write": false, 00:34:13.984 "abort": true, 00:34:13.984 "seek_hole": false, 
00:34:13.984 "seek_data": false, 00:34:13.984 "copy": true, 00:34:13.984 "nvme_iov_md": false 00:34:13.984 }, 00:34:13.984 "memory_domains": [ 00:34:13.984 { 00:34:13.984 "dma_device_id": "system", 00:34:13.984 "dma_device_type": 1 00:34:13.984 }, 00:34:13.984 { 00:34:13.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:13.984 "dma_device_type": 2 00:34:13.984 } 00:34:13.984 ], 00:34:13.984 "driver_specific": {} 00:34:13.984 } 00:34:13.984 ] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.984 
13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.984 "name": "Existed_Raid", 00:34:13.984 "uuid": "0e415293-a6ff-4615-8168-6c6b481a947c", 00:34:13.984 "strip_size_kb": 0, 00:34:13.984 "state": "online", 00:34:13.984 "raid_level": "raid1", 00:34:13.984 "superblock": true, 00:34:13.984 "num_base_bdevs": 2, 00:34:13.984 "num_base_bdevs_discovered": 2, 00:34:13.984 "num_base_bdevs_operational": 2, 00:34:13.984 "base_bdevs_list": [ 00:34:13.984 { 00:34:13.984 "name": "BaseBdev1", 00:34:13.984 "uuid": "1725c35d-12ad-46c6-8503-5f206f35cf05", 00:34:13.984 "is_configured": true, 00:34:13.984 "data_offset": 256, 00:34:13.984 "data_size": 7936 00:34:13.984 }, 00:34:13.984 { 00:34:13.984 "name": "BaseBdev2", 00:34:13.984 "uuid": "457fc9e9-b248-4e06-92e7-519668099bf7", 00:34:13.984 "is_configured": true, 00:34:13.984 "data_offset": 256, 00:34:13.984 "data_size": 7936 00:34:13.984 } 00:34:13.984 ] 00:34:13.984 }' 00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:34:13.984 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:14.550 [2024-10-28 13:45:28.584114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:14.550 "name": "Existed_Raid", 00:34:14.550 "aliases": [ 00:34:14.550 "0e415293-a6ff-4615-8168-6c6b481a947c" 00:34:14.550 ], 00:34:14.550 "product_name": "Raid Volume", 00:34:14.550 "block_size": 4096, 00:34:14.550 "num_blocks": 7936, 
00:34:14.550 "uuid": "0e415293-a6ff-4615-8168-6c6b481a947c", 00:34:14.550 "md_size": 32, 00:34:14.550 "md_interleave": false, 00:34:14.550 "dif_type": 0, 00:34:14.550 "assigned_rate_limits": { 00:34:14.550 "rw_ios_per_sec": 0, 00:34:14.550 "rw_mbytes_per_sec": 0, 00:34:14.550 "r_mbytes_per_sec": 0, 00:34:14.550 "w_mbytes_per_sec": 0 00:34:14.550 }, 00:34:14.550 "claimed": false, 00:34:14.550 "zoned": false, 00:34:14.550 "supported_io_types": { 00:34:14.550 "read": true, 00:34:14.550 "write": true, 00:34:14.550 "unmap": false, 00:34:14.550 "flush": false, 00:34:14.550 "reset": true, 00:34:14.550 "nvme_admin": false, 00:34:14.550 "nvme_io": false, 00:34:14.550 "nvme_io_md": false, 00:34:14.550 "write_zeroes": true, 00:34:14.550 "zcopy": false, 00:34:14.550 "get_zone_info": false, 00:34:14.550 "zone_management": false, 00:34:14.550 "zone_append": false, 00:34:14.550 "compare": false, 00:34:14.550 "compare_and_write": false, 00:34:14.550 "abort": false, 00:34:14.550 "seek_hole": false, 00:34:14.550 "seek_data": false, 00:34:14.550 "copy": false, 00:34:14.550 "nvme_iov_md": false 00:34:14.550 }, 00:34:14.550 "memory_domains": [ 00:34:14.550 { 00:34:14.550 "dma_device_id": "system", 00:34:14.550 "dma_device_type": 1 00:34:14.550 }, 00:34:14.550 { 00:34:14.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.550 "dma_device_type": 2 00:34:14.550 }, 00:34:14.550 { 00:34:14.550 "dma_device_id": "system", 00:34:14.550 "dma_device_type": 1 00:34:14.550 }, 00:34:14.550 { 00:34:14.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.550 "dma_device_type": 2 00:34:14.550 } 00:34:14.550 ], 00:34:14.550 "driver_specific": { 00:34:14.550 "raid": { 00:34:14.550 "uuid": "0e415293-a6ff-4615-8168-6c6b481a947c", 00:34:14.550 "strip_size_kb": 0, 00:34:14.550 "state": "online", 00:34:14.550 "raid_level": "raid1", 00:34:14.550 "superblock": true, 00:34:14.550 "num_base_bdevs": 2, 00:34:14.550 "num_base_bdevs_discovered": 2, 00:34:14.550 "num_base_bdevs_operational": 2, 00:34:14.550 
"base_bdevs_list": [ 00:34:14.550 { 00:34:14.550 "name": "BaseBdev1", 00:34:14.550 "uuid": "1725c35d-12ad-46c6-8503-5f206f35cf05", 00:34:14.550 "is_configured": true, 00:34:14.550 "data_offset": 256, 00:34:14.550 "data_size": 7936 00:34:14.550 }, 00:34:14.550 { 00:34:14.550 "name": "BaseBdev2", 00:34:14.550 "uuid": "457fc9e9-b248-4e06-92e7-519668099bf7", 00:34:14.550 "is_configured": true, 00:34:14.550 "data_offset": 256, 00:34:14.550 "data_size": 7936 00:34:14.550 } 00:34:14.550 ] 00:34:14.550 } 00:34:14.550 } 00:34:14.550 }' 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:14.550 BaseBdev2' 00:34:14.550 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.809 [2024-10-28 13:45:28.855852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:14.809 "name": "Existed_Raid", 00:34:14.809 "uuid": "0e415293-a6ff-4615-8168-6c6b481a947c", 00:34:14.809 "strip_size_kb": 0, 00:34:14.809 "state": "online", 00:34:14.809 "raid_level": "raid1", 00:34:14.809 "superblock": true, 00:34:14.809 "num_base_bdevs": 2, 00:34:14.809 "num_base_bdevs_discovered": 1, 00:34:14.809 "num_base_bdevs_operational": 1, 00:34:14.809 "base_bdevs_list": [ 00:34:14.809 { 00:34:14.809 "name": null, 00:34:14.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.809 "is_configured": false, 00:34:14.809 "data_offset": 0, 00:34:14.809 "data_size": 7936 00:34:14.809 }, 00:34:14.809 { 00:34:14.809 "name": "BaseBdev2", 00:34:14.809 "uuid": "457fc9e9-b248-4e06-92e7-519668099bf7", 00:34:14.809 "is_configured": true, 00:34:14.809 "data_offset": 256, 00:34:14.809 "data_size": 7936 00:34:14.809 } 00:34:14.809 ] 00:34:14.809 }' 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:14.809 13:45:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.376 [2024-10-28 13:45:29.457693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:15.376 [2024-10-28 13:45:29.458052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:15.376 [2024-10-28 13:45:29.471654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:15.376 [2024-10-28 13:45:29.471964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:15.376 [2024-10-28 13:45:29.472162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:34:15.376 13:45:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:15.376 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 99887 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 99887 ']' 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 99887 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:15.635 13:45:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99887 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:15.635 killing process with pid 99887 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99887' 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 99887 00:34:15.635 [2024-10-28 13:45:29.567357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:15.635 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 99887 00:34:15.635 [2024-10-28 13:45:29.568811] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:15.893 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:34:15.893 00:34:15.893 real 0m4.609s 00:34:15.893 user 0m7.563s 00:34:15.893 sys 0m0.748s 00:34:15.893 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:15.893 13:45:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.893 ************************************ 00:34:15.893 END TEST raid_state_function_test_sb_md_separate 00:34:15.893 ************************************ 00:34:15.893 13:45:29 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:34:15.893 13:45:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:15.893 13:45:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:15.893 13:45:29 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:34:15.893 ************************************ 00:34:15.893 START TEST raid_superblock_test_md_separate 00:34:15.893 ************************************ 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:15.893 13:45:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=100128 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 100128 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:15.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 100128 ']' 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:15.893 13:45:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:15.893 [2024-10-28 13:45:29.992383] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:34:15.893 [2024-10-28 13:45:29.992597] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100128 ] 00:34:16.151 [2024-10-28 13:45:30.144771] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:34:16.151 [2024-10-28 13:45:30.177653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.151 [2024-10-28 13:45:30.227928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.151 [2024-10-28 13:45:30.290745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:16.151 [2024-10-28 13:45:30.290788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:34:17.087 13:45:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.087 13:45:30 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.087 malloc1 00:34:17.087 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.087 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:17.087 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.087 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.087 [2024-10-28 13:45:31.021539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:17.087 [2024-10-28 13:45:31.021642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:17.087 [2024-10-28 13:45:31.021678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:17.087 [2024-10-28 13:45:31.021704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:17.088 [2024-10-28 13:45:31.024857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:17.088 [2024-10-28 13:45:31.024939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:17.088 pt1 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:17.088 13:45:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.088 malloc2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.088 [2024-10-28 13:45:31.056446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:17.088 [2024-10-28 13:45:31.056674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:17.088 [2024-10-28 13:45:31.056753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:17.088 [2024-10-28 13:45:31.056883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:17.088 [2024-10-28 13:45:31.059612] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:17.088 [2024-10-28 13:45:31.059771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:17.088 pt2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.088 [2024-10-28 13:45:31.068613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:17.088 [2024-10-28 13:45:31.071350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:17.088 [2024-10-28 13:45:31.071598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:34:17.088 [2024-10-28 13:45:31.071621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:17.088 [2024-10-28 13:45:31.071746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:17.088 [2024-10-28 13:45:31.071895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:34:17.088 [2024-10-28 13:45:31.071950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:34:17.088 [2024-10-28 13:45:31.072072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.088 13:45:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:17.088 "name": "raid_bdev1", 00:34:17.088 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:17.088 "strip_size_kb": 0, 00:34:17.088 "state": "online", 00:34:17.088 "raid_level": "raid1", 00:34:17.088 "superblock": true, 00:34:17.088 "num_base_bdevs": 2, 00:34:17.088 "num_base_bdevs_discovered": 2, 00:34:17.088 "num_base_bdevs_operational": 2, 00:34:17.088 "base_bdevs_list": [ 00:34:17.088 { 00:34:17.088 "name": "pt1", 00:34:17.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:17.088 "is_configured": true, 00:34:17.088 "data_offset": 256, 00:34:17.088 "data_size": 7936 00:34:17.088 }, 00:34:17.088 { 00:34:17.088 "name": "pt2", 00:34:17.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:17.088 "is_configured": true, 00:34:17.088 "data_offset": 256, 00:34:17.088 "data_size": 7936 00:34:17.088 } 00:34:17.088 ] 00:34:17.088 }' 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:17.088 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:17.653 13:45:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.653 [2024-10-28 13:45:31.585324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.653 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:17.653 "name": "raid_bdev1", 00:34:17.653 "aliases": [ 00:34:17.653 "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f" 00:34:17.653 ], 00:34:17.653 "product_name": "Raid Volume", 00:34:17.653 "block_size": 4096, 00:34:17.653 "num_blocks": 7936, 00:34:17.653 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:17.653 "md_size": 32, 00:34:17.653 "md_interleave": false, 00:34:17.653 "dif_type": 0, 00:34:17.653 "assigned_rate_limits": { 00:34:17.653 "rw_ios_per_sec": 0, 00:34:17.654 "rw_mbytes_per_sec": 0, 00:34:17.654 "r_mbytes_per_sec": 0, 00:34:17.654 "w_mbytes_per_sec": 0 00:34:17.654 }, 00:34:17.654 "claimed": false, 00:34:17.654 "zoned": false, 00:34:17.654 "supported_io_types": { 00:34:17.654 "read": true, 00:34:17.654 "write": true, 00:34:17.654 "unmap": false, 00:34:17.654 "flush": false, 00:34:17.654 "reset": true, 00:34:17.654 "nvme_admin": false, 00:34:17.654 "nvme_io": false, 00:34:17.654 "nvme_io_md": false, 00:34:17.654 "write_zeroes": true, 00:34:17.654 "zcopy": false, 00:34:17.654 "get_zone_info": false, 00:34:17.654 "zone_management": false, 00:34:17.654 "zone_append": false, 00:34:17.654 "compare": false, 00:34:17.654 "compare_and_write": false, 00:34:17.654 "abort": false, 00:34:17.654 "seek_hole": false, 00:34:17.654 "seek_data": false, 00:34:17.654 "copy": false, 00:34:17.654 
"nvme_iov_md": false 00:34:17.654 }, 00:34:17.654 "memory_domains": [ 00:34:17.654 { 00:34:17.654 "dma_device_id": "system", 00:34:17.654 "dma_device_type": 1 00:34:17.654 }, 00:34:17.654 { 00:34:17.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.654 "dma_device_type": 2 00:34:17.654 }, 00:34:17.654 { 00:34:17.654 "dma_device_id": "system", 00:34:17.654 "dma_device_type": 1 00:34:17.654 }, 00:34:17.654 { 00:34:17.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:17.654 "dma_device_type": 2 00:34:17.654 } 00:34:17.654 ], 00:34:17.654 "driver_specific": { 00:34:17.654 "raid": { 00:34:17.654 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:17.654 "strip_size_kb": 0, 00:34:17.654 "state": "online", 00:34:17.654 "raid_level": "raid1", 00:34:17.654 "superblock": true, 00:34:17.654 "num_base_bdevs": 2, 00:34:17.654 "num_base_bdevs_discovered": 2, 00:34:17.654 "num_base_bdevs_operational": 2, 00:34:17.654 "base_bdevs_list": [ 00:34:17.654 { 00:34:17.654 "name": "pt1", 00:34:17.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:17.654 "is_configured": true, 00:34:17.654 "data_offset": 256, 00:34:17.654 "data_size": 7936 00:34:17.654 }, 00:34:17.654 { 00:34:17.654 "name": "pt2", 00:34:17.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:17.654 "is_configured": true, 00:34:17.654 "data_offset": 256, 00:34:17.654 "data_size": 7936 00:34:17.654 } 00:34:17.654 ] 00:34:17.654 } 00:34:17.654 } 00:34:17.654 }' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:17.654 pt2' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.654 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 
00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 [2024-10-28 13:45:31.833246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bd8ad315-ff1b-43a8-977e-7fda9b69ef8f 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z bd8ad315-ff1b-43a8-977e-7fda9b69ef8f ']' 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 [2024-10-28 13:45:31.888930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:17.912 [2024-10-28 13:45:31.888973] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:17.912 [2024-10-28 13:45:31.889146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:17.912 [2024-10-28 13:45:31.889274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:34:17.912 [2024-10-28 13:45:31.889298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:17.912 13:45:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:17.912 13:45:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:17.912 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.913 [2024-10-28 13:45:32.024978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:17.913 [2024-10-28 13:45:32.027760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:17.913 [2024-10-28 13:45:32.027873] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:17.913 [2024-10-28 13:45:32.027956] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:17.913 [2024-10-28 13:45:32.027981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:17.913 [2024-10-28 13:45:32.027995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:34:17.913 request: 00:34:17.913 { 00:34:17.913 "name": "raid_bdev1", 00:34:17.913 "raid_level": "raid1", 00:34:17.913 "base_bdevs": [ 00:34:17.913 "malloc1", 00:34:17.913 "malloc2" 00:34:17.913 ], 00:34:17.913 "superblock": false, 00:34:17.913 "method": "bdev_raid_create", 00:34:17.913 "req_id": 1 00:34:17.913 } 00:34:17.913 Got JSON-RPC error response 00:34:17.913 response: 00:34:17.913 { 00:34:17.913 "code": -17, 00:34:17.913 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:17.913 } 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:17.913 13:45:32 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:17.913 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.170 [2024-10-28 13:45:32.093054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:18.170 [2024-10-28 13:45:32.093277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:18.170 [2024-10-28 13:45:32.093353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:34:18.170 [2024-10-28 13:45:32.093485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:18.170 [2024-10-28 13:45:32.096373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:18.170 [2024-10-28 13:45:32.096540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:18.170 [2024-10-28 13:45:32.096712] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:18.170 [2024-10-28 13:45:32.096877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:18.170 pt1 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:18.170 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:18.171 
13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:18.171 "name": "raid_bdev1", 00:34:18.171 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:18.171 "strip_size_kb": 0, 00:34:18.171 "state": "configuring", 00:34:18.171 "raid_level": "raid1", 00:34:18.171 "superblock": true, 00:34:18.171 "num_base_bdevs": 2, 00:34:18.171 "num_base_bdevs_discovered": 1, 00:34:18.171 "num_base_bdevs_operational": 2, 00:34:18.171 "base_bdevs_list": [ 00:34:18.171 { 00:34:18.171 "name": "pt1", 00:34:18.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:18.171 "is_configured": true, 00:34:18.171 "data_offset": 256, 00:34:18.171 "data_size": 7936 00:34:18.171 }, 00:34:18.171 { 00:34:18.171 "name": null, 00:34:18.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:18.171 "is_configured": false, 00:34:18.171 "data_offset": 256, 00:34:18.171 "data_size": 7936 00:34:18.171 } 00:34:18.171 ] 00:34:18.171 }' 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:18.171 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # 
(( i = 1 )) 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.737 [2024-10-28 13:45:32.621528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:18.737 [2024-10-28 13:45:32.621633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:18.737 [2024-10-28 13:45:32.621673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:18.737 [2024-10-28 13:45:32.621691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:18.737 [2024-10-28 13:45:32.621954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:18.737 [2024-10-28 13:45:32.621984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:18.737 [2024-10-28 13:45:32.622066] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:18.737 [2024-10-28 13:45:32.622114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:18.737 [2024-10-28 13:45:32.622277] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:18.737 [2024-10-28 13:45:32.622299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:18.737 [2024-10-28 13:45:32.622398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:18.737 [2024-10-28 13:45:32.622572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:34:18.737 [2024-10-28 13:45:32.622587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:18.737 [2024-10-28 13:45:32.622682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:18.737 pt2 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:18.737 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:18.738 "name": "raid_bdev1", 00:34:18.738 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:18.738 "strip_size_kb": 0, 00:34:18.738 "state": "online", 00:34:18.738 "raid_level": "raid1", 00:34:18.738 "superblock": true, 00:34:18.738 "num_base_bdevs": 2, 00:34:18.738 "num_base_bdevs_discovered": 2, 00:34:18.738 "num_base_bdevs_operational": 2, 00:34:18.738 "base_bdevs_list": [ 00:34:18.738 { 00:34:18.738 "name": "pt1", 00:34:18.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:18.738 "is_configured": true, 00:34:18.738 "data_offset": 256, 00:34:18.738 "data_size": 7936 00:34:18.738 }, 00:34:18.738 { 00:34:18.738 "name": "pt2", 00:34:18.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:18.738 "is_configured": true, 00:34:18.738 "data_offset": 256, 00:34:18.738 "data_size": 7936 00:34:18.738 } 00:34:18.738 ] 00:34:18.738 }' 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:18.738 13:45:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.997 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:18.997 [2024-10-28 13:45:33.134095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:19.254 "name": "raid_bdev1", 00:34:19.254 "aliases": [ 00:34:19.254 "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f" 00:34:19.254 ], 00:34:19.254 "product_name": "Raid Volume", 00:34:19.254 "block_size": 4096, 00:34:19.254 "num_blocks": 7936, 00:34:19.254 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:19.254 "md_size": 32, 00:34:19.254 "md_interleave": false, 00:34:19.254 "dif_type": 0, 00:34:19.254 "assigned_rate_limits": { 00:34:19.254 "rw_ios_per_sec": 0, 00:34:19.254 "rw_mbytes_per_sec": 0, 00:34:19.254 "r_mbytes_per_sec": 0, 00:34:19.254 "w_mbytes_per_sec": 0 00:34:19.254 }, 00:34:19.254 "claimed": false, 00:34:19.254 "zoned": false, 00:34:19.254 "supported_io_types": { 00:34:19.254 "read": true, 00:34:19.254 "write": true, 00:34:19.254 "unmap": false, 00:34:19.254 
"flush": false, 00:34:19.254 "reset": true, 00:34:19.254 "nvme_admin": false, 00:34:19.254 "nvme_io": false, 00:34:19.254 "nvme_io_md": false, 00:34:19.254 "write_zeroes": true, 00:34:19.254 "zcopy": false, 00:34:19.254 "get_zone_info": false, 00:34:19.254 "zone_management": false, 00:34:19.254 "zone_append": false, 00:34:19.254 "compare": false, 00:34:19.254 "compare_and_write": false, 00:34:19.254 "abort": false, 00:34:19.254 "seek_hole": false, 00:34:19.254 "seek_data": false, 00:34:19.254 "copy": false, 00:34:19.254 "nvme_iov_md": false 00:34:19.254 }, 00:34:19.254 "memory_domains": [ 00:34:19.254 { 00:34:19.254 "dma_device_id": "system", 00:34:19.254 "dma_device_type": 1 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.254 "dma_device_type": 2 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "dma_device_id": "system", 00:34:19.254 "dma_device_type": 1 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.254 "dma_device_type": 2 00:34:19.254 } 00:34:19.254 ], 00:34:19.254 "driver_specific": { 00:34:19.254 "raid": { 00:34:19.254 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:19.254 "strip_size_kb": 0, 00:34:19.254 "state": "online", 00:34:19.254 "raid_level": "raid1", 00:34:19.254 "superblock": true, 00:34:19.254 "num_base_bdevs": 2, 00:34:19.254 "num_base_bdevs_discovered": 2, 00:34:19.254 "num_base_bdevs_operational": 2, 00:34:19.254 "base_bdevs_list": [ 00:34:19.254 { 00:34:19.254 "name": "pt1", 00:34:19.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:19.254 "is_configured": true, 00:34:19.254 "data_offset": 256, 00:34:19.254 "data_size": 7936 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "name": "pt2", 00:34:19.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:19.254 "is_configured": true, 00:34:19.254 "data_offset": 256, 00:34:19.254 "data_size": 7936 00:34:19.254 } 00:34:19.254 ] 00:34:19.254 } 00:34:19.254 } 00:34:19.254 }' 00:34:19.254 13:45:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:19.254 pt2' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.513 [2024-10-28 13:45:33.422223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' bd8ad315-ff1b-43a8-977e-7fda9b69ef8f '!=' bd8ad315-ff1b-43a8-977e-7fda9b69ef8f ']' 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.513 [2024-10-28 13:45:33.469860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.513 13:45:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.513 "name": "raid_bdev1", 00:34:19.513 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:19.513 "strip_size_kb": 0, 00:34:19.513 "state": "online", 00:34:19.513 "raid_level": "raid1", 00:34:19.513 "superblock": true, 00:34:19.513 "num_base_bdevs": 2, 00:34:19.513 "num_base_bdevs_discovered": 1, 00:34:19.513 "num_base_bdevs_operational": 1, 00:34:19.513 "base_bdevs_list": [ 00:34:19.513 { 00:34:19.513 "name": null, 00:34:19.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.513 "is_configured": false, 00:34:19.513 "data_offset": 0, 00:34:19.513 "data_size": 7936 00:34:19.513 }, 00:34:19.513 { 00:34:19.513 "name": "pt2", 00:34:19.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:19.513 "is_configured": true, 00:34:19.513 "data_offset": 256, 00:34:19.513 "data_size": 7936 00:34:19.513 } 00:34:19.513 ] 00:34:19.513 }' 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.513 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 [2024-10-28 13:45:33.974252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:34:20.106 [2024-10-28 13:45:33.974306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.106 [2024-10-28 13:45:33.974424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.106 [2024-10-28 13:45:33.974492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.106 [2024-10-28 13:45:33.974512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:20.106 13:45:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.106 13:45:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.106 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.106 [2024-10-28 13:45:34.046226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:20.106 [2024-10-28 13:45:34.046450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:20.106 [2024-10-28 13:45:34.046492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:20.106 [2024-10-28 13:45:34.046512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:20.106 [2024-10-28 13:45:34.049575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:20.106 [2024-10-28 13:45:34.049767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:20.106 [2024-10-28 13:45:34.049849] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:34:20.106 [2024-10-28 13:45:34.049902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:20.106 [2024-10-28 13:45:34.050000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:20.106 [2024-10-28 13:45:34.050019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:20.106 [2024-10-28 13:45:34.050112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:34:20.106 [2024-10-28 13:45:34.050254] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:20.106 [2024-10-28 13:45:34.050270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:20.107 [2024-10-28 13:45:34.050413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:20.107 pt2 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.107 "name": "raid_bdev1", 00:34:20.107 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:20.107 "strip_size_kb": 0, 00:34:20.107 "state": "online", 00:34:20.107 "raid_level": "raid1", 00:34:20.107 "superblock": true, 00:34:20.107 "num_base_bdevs": 2, 00:34:20.107 "num_base_bdevs_discovered": 1, 00:34:20.107 "num_base_bdevs_operational": 1, 00:34:20.107 "base_bdevs_list": [ 00:34:20.107 { 00:34:20.107 "name": null, 00:34:20.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.107 "is_configured": false, 00:34:20.107 "data_offset": 256, 00:34:20.107 "data_size": 7936 00:34:20.107 }, 00:34:20.107 { 00:34:20.107 "name": "pt2", 00:34:20.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:20.107 "is_configured": true, 00:34:20.107 "data_offset": 256, 00:34:20.107 "data_size": 7936 00:34:20.107 } 00:34:20.107 ] 00:34:20.107 }' 00:34:20.107 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.107 13:45:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.695 [2024-10-28 13:45:34.598805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:20.695 [2024-10-28 13:45:34.598860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:20.695 [2024-10-28 13:45:34.598967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:20.695 [2024-10-28 13:45:34.599054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:20.695 [2024-10-28 13:45:34.599085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.695 [2024-10-28 13:45:34.662722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:20.695 [2024-10-28 13:45:34.662865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:20.695 [2024-10-28 13:45:34.662898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:34:20.695 [2024-10-28 13:45:34.662913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:20.695 [2024-10-28 13:45:34.665985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:20.695 [2024-10-28 13:45:34.666031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:20.695 [2024-10-28 13:45:34.666108] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:20.695 [2024-10-28 13:45:34.666165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:20.695 [2024-10-28 13:45:34.666301] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:20.695 [2024-10-28 13:45:34.666318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:20.695 [2024-10-28 13:45:34.666350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:34:20.695 
[2024-10-28 13:45:34.666389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:20.695 [2024-10-28 13:45:34.666474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:34:20.695 [2024-10-28 13:45:34.666490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:20.695 [2024-10-28 13:45:34.666590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:34:20.695 [2024-10-28 13:45:34.666701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:34:20.695 [2024-10-28 13:45:34.666727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:34:20.695 [2024-10-28 13:45:34.666881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:20.695 pt1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.695 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.696 "name": "raid_bdev1", 00:34:20.696 "uuid": "bd8ad315-ff1b-43a8-977e-7fda9b69ef8f", 00:34:20.696 "strip_size_kb": 0, 00:34:20.696 "state": "online", 00:34:20.696 "raid_level": "raid1", 00:34:20.696 "superblock": true, 00:34:20.696 "num_base_bdevs": 2, 00:34:20.696 "num_base_bdevs_discovered": 1, 00:34:20.696 "num_base_bdevs_operational": 1, 00:34:20.696 "base_bdevs_list": [ 00:34:20.696 { 00:34:20.696 "name": null, 00:34:20.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.696 "is_configured": false, 00:34:20.696 "data_offset": 256, 00:34:20.696 "data_size": 7936 00:34:20.696 }, 00:34:20.696 { 00:34:20.696 "name": "pt2", 00:34:20.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:20.696 "is_configured": true, 00:34:20.696 "data_offset": 256, 00:34:20.696 "data_size": 7936 00:34:20.696 } 00:34:20.696 ] 00:34:20.696 }' 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.696 13:45:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:34:21.263 [2024-10-28 13:45:35.251504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' bd8ad315-ff1b-43a8-977e-7fda9b69ef8f '!=' bd8ad315-ff1b-43a8-977e-7fda9b69ef8f ']' 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 100128 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' 
-z 100128 ']' 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 100128 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100128 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100128' 00:34:21.263 killing process with pid 100128 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 100128 00:34:21.263 [2024-10-28 13:45:35.338618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:21.263 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 100128 00:34:21.263 [2024-10-28 13:45:35.338935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:21.263 [2024-10-28 13:45:35.339013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:21.263 [2024-10-28 13:45:35.339037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:34:21.263 [2024-10-28 13:45:35.369806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:21.522 13:45:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:34:21.522 00:34:21.522 real 0m5.754s 00:34:21.522 user 0m9.734s 00:34:21.522 sys 0m0.938s 
00:34:21.522 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:21.522 ************************************ 00:34:21.522 END TEST raid_superblock_test_md_separate 00:34:21.522 ************************************ 00:34:21.522 13:45:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 13:45:35 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:34:21.781 13:45:35 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:34:21.781 13:45:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:21.781 13:45:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:21.781 13:45:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 ************************************ 00:34:21.781 START TEST raid_rebuild_test_sb_md_separate 00:34:21.781 ************************************ 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 
13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:21.781 13:45:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=100451 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 100451 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 100451 ']' 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:21.781 13:45:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:21.781 [2024-10-28 13:45:35.822338] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:34:21.781 [2024-10-28 13:45:35.822842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100451 ] 00:34:21.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:21.781 Zero copy mechanism will not be used. 
00:34:22.039 [2024-10-28 13:45:35.980557] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:22.039 [2024-10-28 13:45:36.012709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.039 [2024-10-28 13:45:36.067651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.039 [2024-10-28 13:45:36.133116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:22.039 [2024-10-28 13:45:36.133443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.974 BaseBdev1_malloc 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.974 [2024-10-28 13:45:36.882182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on BaseBdev1_malloc 00:34:22.974 [2024-10-28 13:45:36.882305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.974 [2024-10-28 13:45:36.882373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:22.974 [2024-10-28 13:45:36.882396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.974 [2024-10-28 13:45:36.885725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.974 [2024-10-28 13:45:36.885954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:22.974 BaseBdev1 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.974 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 BaseBdev2_malloc 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 [2024-10-28 13:45:36.917567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:22.975 [2024-10-28 13:45:36.917667] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.975 [2024-10-28 13:45:36.917708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:22.975 [2024-10-28 13:45:36.917741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.975 [2024-10-28 13:45:36.920721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.975 [2024-10-28 13:45:36.920772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:22.975 BaseBdev2 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 spare_malloc 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 spare_delay 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 [2024-10-28 13:45:36.965048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:22.975 [2024-10-28 13:45:36.965203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.975 [2024-10-28 13:45:36.965254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:22.975 [2024-10-28 13:45:36.965288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.975 [2024-10-28 13:45:36.968203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.975 [2024-10-28 13:45:36.968282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:22.975 spare 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 [2024-10-28 13:45:36.977195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:22.975 [2024-10-28 13:45:36.980153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.975 [2024-10-28 13:45:36.980404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:34:22.975 [2024-10-28 13:45:36.980461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:22.975 [2024-10-28 13:45:36.980586] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:22.975 [2024-10-28 13:45:36.980811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:34:22.975 [2024-10-28 13:45:36.980830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:34:22.975 [2024-10-28 13:45:36.980969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.975 13:45:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:22.975 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.975 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.975 "name": "raid_bdev1", 00:34:22.975 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:22.975 "strip_size_kb": 0, 00:34:22.975 "state": "online", 00:34:22.975 "raid_level": "raid1", 00:34:22.975 "superblock": true, 00:34:22.975 "num_base_bdevs": 2, 00:34:22.975 "num_base_bdevs_discovered": 2, 00:34:22.975 "num_base_bdevs_operational": 2, 00:34:22.975 "base_bdevs_list": [ 00:34:22.975 { 00:34:22.975 "name": "BaseBdev1", 00:34:22.975 "uuid": "f78142ab-589c-5868-8637-2be130f27f26", 00:34:22.975 "is_configured": true, 00:34:22.975 "data_offset": 256, 00:34:22.975 "data_size": 7936 00:34:22.975 }, 00:34:22.975 { 00:34:22.975 "name": "BaseBdev2", 00:34:22.975 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:22.975 "is_configured": true, 00:34:22.975 "data_offset": 256, 00:34:22.975 "data_size": 7936 00:34:22.975 } 00:34:22.975 ] 00:34:22.975 }' 00:34:22.975 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.975 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:23.541 [2024-10-28 13:45:37.493979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:23.541 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:23.799 [2024-10-28 13:45:37.865809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:34:23.799 /dev/nbd0 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:23.799 
13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:23.799 1+0 records in 00:34:23.799 1+0 records out 00:34:23.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316709 s, 12.9 MB/s 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:34:23.799 13:45:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:34:25.173 7936+0 records in 00:34:25.173 7936+0 records out 00:34:25.173 32505856 bytes (33 MB, 31 MiB) copied, 1.01728 s, 32.0 MB/s 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:25.173 13:45:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:25.173 13:45:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:25.173 [2024-10-28 13:45:39.253656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:25.173 [2024-10-28 13:45:39.273742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.173 13:45:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:25.173 "name": "raid_bdev1", 00:34:25.173 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:25.173 "strip_size_kb": 0, 00:34:25.173 "state": "online", 00:34:25.173 "raid_level": "raid1", 00:34:25.173 "superblock": true, 00:34:25.173 "num_base_bdevs": 2, 00:34:25.173 "num_base_bdevs_discovered": 1, 00:34:25.173 "num_base_bdevs_operational": 1, 00:34:25.173 "base_bdevs_list": [ 00:34:25.173 { 00:34:25.173 "name": null, 00:34:25.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.173 "is_configured": false, 00:34:25.173 "data_offset": 0, 00:34:25.173 "data_size": 7936 00:34:25.173 }, 00:34:25.173 { 00:34:25.173 "name": "BaseBdev2", 00:34:25.173 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:25.173 "is_configured": true, 00:34:25.173 "data_offset": 256, 00:34:25.173 "data_size": 7936 00:34:25.173 } 00:34:25.173 ] 00:34:25.173 }' 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:25.173 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:25.739 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:25.739 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.739 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:25.739 [2024-10-28 13:45:39.769919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:25.739 [2024-10-28 13:45:39.774202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d00018d670 00:34:25.739 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.739 13:45:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:25.739 [2024-10-28 13:45:39.777250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:26.671 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:26.936 "name": "raid_bdev1", 00:34:26.936 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:26.936 "strip_size_kb": 0, 00:34:26.936 "state": "online", 00:34:26.936 "raid_level": "raid1", 00:34:26.936 "superblock": true, 00:34:26.936 "num_base_bdevs": 2, 00:34:26.936 
"num_base_bdevs_discovered": 2, 00:34:26.936 "num_base_bdevs_operational": 2, 00:34:26.936 "process": { 00:34:26.936 "type": "rebuild", 00:34:26.936 "target": "spare", 00:34:26.936 "progress": { 00:34:26.936 "blocks": 2560, 00:34:26.936 "percent": 32 00:34:26.936 } 00:34:26.936 }, 00:34:26.936 "base_bdevs_list": [ 00:34:26.936 { 00:34:26.936 "name": "spare", 00:34:26.936 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:26.936 "is_configured": true, 00:34:26.936 "data_offset": 256, 00:34:26.936 "data_size": 7936 00:34:26.936 }, 00:34:26.936 { 00:34:26.936 "name": "BaseBdev2", 00:34:26.936 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:26.936 "is_configured": true, 00:34:26.936 "data_offset": 256, 00:34:26.936 "data_size": 7936 00:34:26.936 } 00:34:26.936 ] 00:34:26.936 }' 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:26.936 [2024-10-28 13:45:40.956562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:26.936 [2024-10-28 13:45:40.987136] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:26.936 [2024-10-28 13:45:40.987467] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:26.936 [2024-10-28 13:45:40.987505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:26.936 [2024-10-28 13:45:40.987524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:26.936 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:26.937 13:45:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:26.937 "name": "raid_bdev1", 00:34:26.937 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:26.937 "strip_size_kb": 0, 00:34:26.937 "state": "online", 00:34:26.937 "raid_level": "raid1", 00:34:26.937 "superblock": true, 00:34:26.937 "num_base_bdevs": 2, 00:34:26.937 "num_base_bdevs_discovered": 1, 00:34:26.937 "num_base_bdevs_operational": 1, 00:34:26.937 "base_bdevs_list": [ 00:34:26.937 { 00:34:26.937 "name": null, 00:34:26.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:26.937 "is_configured": false, 00:34:26.937 "data_offset": 0, 00:34:26.937 "data_size": 7936 00:34:26.937 }, 00:34:26.937 { 00:34:26.937 "name": "BaseBdev2", 00:34:26.937 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:26.937 "is_configured": true, 00:34:26.937 "data_offset": 256, 00:34:26.937 "data_size": 7936 00:34:26.937 } 00:34:26.937 ] 00:34:26.937 }' 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:26.937 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:27.537 13:45:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:27.537 "name": "raid_bdev1", 00:34:27.537 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:27.537 "strip_size_kb": 0, 00:34:27.537 "state": "online", 00:34:27.537 "raid_level": "raid1", 00:34:27.537 "superblock": true, 00:34:27.537 "num_base_bdevs": 2, 00:34:27.537 "num_base_bdevs_discovered": 1, 00:34:27.537 "num_base_bdevs_operational": 1, 00:34:27.537 "base_bdevs_list": [ 00:34:27.537 { 00:34:27.537 "name": null, 00:34:27.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.537 "is_configured": false, 00:34:27.537 "data_offset": 0, 00:34:27.537 "data_size": 7936 00:34:27.537 }, 00:34:27.537 { 00:34:27.537 "name": "BaseBdev2", 00:34:27.537 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:27.537 "is_configured": true, 00:34:27.537 "data_offset": 256, 00:34:27.537 "data_size": 7936 00:34:27.537 } 00:34:27.537 ] 00:34:27.537 }' 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:27.537 [2024-10-28 13:45:41.676673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:27.537 [2024-10-28 13:45:41.681076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:34:27.537 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.538 13:45:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:27.538 [2024-10-28 13:45:41.683997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.910 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:28.910 "name": "raid_bdev1", 00:34:28.910 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:28.910 "strip_size_kb": 0, 00:34:28.911 "state": "online", 00:34:28.911 "raid_level": "raid1", 00:34:28.911 "superblock": true, 00:34:28.911 "num_base_bdevs": 2, 00:34:28.911 "num_base_bdevs_discovered": 2, 00:34:28.911 "num_base_bdevs_operational": 2, 00:34:28.911 "process": { 00:34:28.911 "type": "rebuild", 00:34:28.911 "target": "spare", 00:34:28.911 "progress": { 00:34:28.911 "blocks": 2560, 00:34:28.911 "percent": 32 00:34:28.911 } 00:34:28.911 }, 00:34:28.911 "base_bdevs_list": [ 00:34:28.911 { 00:34:28.911 "name": "spare", 00:34:28.911 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:28.911 "is_configured": true, 00:34:28.911 "data_offset": 256, 00:34:28.911 "data_size": 7936 00:34:28.911 }, 00:34:28.911 { 00:34:28.911 "name": "BaseBdev2", 00:34:28.911 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:28.911 "is_configured": true, 00:34:28.911 "data_offset": 256, 00:34:28.911 "data_size": 7936 00:34:28.911 } 00:34:28.911 ] 00:34:28.911 }' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:28.911 
13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:28.911 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=676 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:28.911 "name": "raid_bdev1", 00:34:28.911 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:28.911 "strip_size_kb": 0, 00:34:28.911 "state": "online", 00:34:28.911 "raid_level": "raid1", 00:34:28.911 "superblock": true, 00:34:28.911 "num_base_bdevs": 2, 00:34:28.911 "num_base_bdevs_discovered": 2, 00:34:28.911 "num_base_bdevs_operational": 2, 00:34:28.911 "process": { 00:34:28.911 "type": "rebuild", 00:34:28.911 "target": "spare", 00:34:28.911 "progress": { 00:34:28.911 "blocks": 2816, 00:34:28.911 "percent": 35 00:34:28.911 } 00:34:28.911 }, 00:34:28.911 "base_bdevs_list": [ 00:34:28.911 { 00:34:28.911 "name": "spare", 00:34:28.911 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:28.911 "is_configured": true, 00:34:28.911 "data_offset": 256, 00:34:28.911 "data_size": 7936 00:34:28.911 }, 00:34:28.911 { 00:34:28.911 "name": "BaseBdev2", 00:34:28.911 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:28.911 "is_configured": true, 00:34:28.911 "data_offset": 256, 00:34:28.911 "data_size": 7936 00:34:28.911 } 00:34:28.911 ] 00:34:28.911 }' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.911 13:45:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:28.911 13:45:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.911 13:45:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:30.285 "name": "raid_bdev1", 00:34:30.285 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:30.285 "strip_size_kb": 0, 00:34:30.285 "state": "online", 00:34:30.285 "raid_level": "raid1", 00:34:30.285 "superblock": true, 00:34:30.285 "num_base_bdevs": 2, 00:34:30.285 "num_base_bdevs_discovered": 2, 00:34:30.285 "num_base_bdevs_operational": 2, 00:34:30.285 "process": { 00:34:30.285 "type": "rebuild", 00:34:30.285 "target": "spare", 
00:34:30.285 "progress": { 00:34:30.285 "blocks": 5888, 00:34:30.285 "percent": 74 00:34:30.285 } 00:34:30.285 }, 00:34:30.285 "base_bdevs_list": [ 00:34:30.285 { 00:34:30.285 "name": "spare", 00:34:30.285 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:30.285 "is_configured": true, 00:34:30.285 "data_offset": 256, 00:34:30.285 "data_size": 7936 00:34:30.285 }, 00:34:30.285 { 00:34:30.285 "name": "BaseBdev2", 00:34:30.285 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:30.285 "is_configured": true, 00:34:30.285 "data_offset": 256, 00:34:30.285 "data_size": 7936 00:34:30.285 } 00:34:30.285 ] 00:34:30.285 }' 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:30.285 13:45:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:30.851 [2024-10-28 13:45:44.808599] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:30.851 [2024-10-28 13:45:44.808690] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:30.851 [2024-10-28 13:45:44.808865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.109 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:31.109 "name": "raid_bdev1", 00:34:31.109 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:31.109 "strip_size_kb": 0, 00:34:31.109 "state": "online", 00:34:31.109 "raid_level": "raid1", 00:34:31.109 "superblock": true, 00:34:31.109 "num_base_bdevs": 2, 00:34:31.109 "num_base_bdevs_discovered": 2, 00:34:31.109 "num_base_bdevs_operational": 2, 00:34:31.109 "base_bdevs_list": [ 00:34:31.109 { 00:34:31.109 "name": "spare", 00:34:31.109 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:31.109 "is_configured": true, 00:34:31.109 "data_offset": 256, 00:34:31.109 "data_size": 7936 00:34:31.109 }, 00:34:31.109 { 00:34:31.109 "name": "BaseBdev2", 00:34:31.109 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:31.109 "is_configured": true, 00:34:31.109 "data_offset": 256, 00:34:31.109 "data_size": 7936 00:34:31.109 } 00:34:31.109 ] 00:34:31.109 }' 00:34:31.109 13:45:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:31.368 "name": "raid_bdev1", 00:34:31.368 "uuid": 
"7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:31.368 "strip_size_kb": 0, 00:34:31.368 "state": "online", 00:34:31.368 "raid_level": "raid1", 00:34:31.368 "superblock": true, 00:34:31.368 "num_base_bdevs": 2, 00:34:31.368 "num_base_bdevs_discovered": 2, 00:34:31.368 "num_base_bdevs_operational": 2, 00:34:31.368 "base_bdevs_list": [ 00:34:31.368 { 00:34:31.368 "name": "spare", 00:34:31.368 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:31.368 "is_configured": true, 00:34:31.368 "data_offset": 256, 00:34:31.368 "data_size": 7936 00:34:31.368 }, 00:34:31.368 { 00:34:31.368 "name": "BaseBdev2", 00:34:31.368 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:31.368 "is_configured": true, 00:34:31.368 "data_offset": 256, 00:34:31.368 "data_size": 7936 00:34:31.368 } 00:34:31.368 ] 00:34:31.368 }' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.368 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:31.626 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.626 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:31.626 "name": "raid_bdev1", 00:34:31.626 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:31.626 "strip_size_kb": 0, 00:34:31.626 "state": "online", 00:34:31.626 "raid_level": "raid1", 00:34:31.626 "superblock": true, 00:34:31.626 "num_base_bdevs": 2, 00:34:31.626 "num_base_bdevs_discovered": 2, 00:34:31.626 "num_base_bdevs_operational": 2, 00:34:31.626 "base_bdevs_list": [ 00:34:31.626 { 00:34:31.626 "name": "spare", 00:34:31.626 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:31.626 "is_configured": true, 00:34:31.626 "data_offset": 256, 00:34:31.626 "data_size": 7936 00:34:31.626 }, 00:34:31.626 { 00:34:31.626 "name": "BaseBdev2", 00:34:31.626 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:31.626 "is_configured": 
true, 00:34:31.626 "data_offset": 256, 00:34:31.626 "data_size": 7936 00:34:31.626 } 00:34:31.626 ] 00:34:31.626 }' 00:34:31.626 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:31.626 13:45:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.194 [2024-10-28 13:45:46.069961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:32.194 [2024-10-28 13:45:46.070264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:32.194 [2024-10-28 13:45:46.070413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:32.194 [2024-10-28 13:45:46.070522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:32.194 [2024-10-28 13:45:46.070541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:32.194 13:45:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:32.194 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:32.452 /dev/nbd0 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:32.452 13:45:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:32.452 1+0 records in 00:34:32.452 1+0 records out 00:34:32.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322865 s, 12.7 MB/s 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:32.452 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:32.712 /dev/nbd1 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:32.712 1+0 records in 00:34:32.712 1+0 records out 00:34:32.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374331 s, 10.9 MB/s 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:32.712 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:32.973 13:45:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:33.231 13:45:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:33.231 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:34:33.489 13:45:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.489 [2024-10-28 13:45:47.509314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:33.489 [2024-10-28 13:45:47.509394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:33.489 [2024-10-28 13:45:47.509445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:33.489 [2024-10-28 13:45:47.509459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:33.489 [2024-10-28 13:45:47.512369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:33.489 [2024-10-28 13:45:47.512429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:33.489 [2024-10-28 13:45:47.512533] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:33.489 [2024-10-28 13:45:47.512597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.489 [2024-10-28 13:45:47.512736] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:33.489 spare 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.489 [2024-10-28 13:45:47.612850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:33.489 [2024-10-28 13:45:47.612885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:33.489 [2024-10-28 13:45:47.613010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:34:33.489 [2024-10-28 13:45:47.613189] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:33.489 [2024-10-28 13:45:47.613206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:33.489 [2024-10-28 13:45:47.613354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.489 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:33.490 13:45:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:33.490 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.748 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:33.748 "name": "raid_bdev1", 00:34:33.748 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:33.748 "strip_size_kb": 0, 00:34:33.748 "state": "online", 00:34:33.748 "raid_level": "raid1", 00:34:33.748 "superblock": true, 00:34:33.748 "num_base_bdevs": 2, 00:34:33.748 "num_base_bdevs_discovered": 2, 00:34:33.748 "num_base_bdevs_operational": 2, 00:34:33.748 "base_bdevs_list": [ 00:34:33.748 { 00:34:33.748 "name": "spare", 00:34:33.748 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:33.748 "is_configured": true, 00:34:33.748 "data_offset": 256, 00:34:33.748 "data_size": 
7936 00:34:33.748 }, 00:34:33.748 { 00:34:33.748 "name": "BaseBdev2", 00:34:33.748 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:33.748 "is_configured": true, 00:34:33.748 "data_offset": 256, 00:34:33.748 "data_size": 7936 00:34:33.748 } 00:34:33.748 ] 00:34:33.748 }' 00:34:33.748 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:33.748 13:45:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.007 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:34.007 "name": "raid_bdev1", 00:34:34.007 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:34.007 "strip_size_kb": 0, 00:34:34.007 "state": "online", 
00:34:34.007 "raid_level": "raid1", 00:34:34.007 "superblock": true, 00:34:34.007 "num_base_bdevs": 2, 00:34:34.007 "num_base_bdevs_discovered": 2, 00:34:34.265 "num_base_bdevs_operational": 2, 00:34:34.265 "base_bdevs_list": [ 00:34:34.265 { 00:34:34.265 "name": "spare", 00:34:34.265 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:34.265 "is_configured": true, 00:34:34.265 "data_offset": 256, 00:34:34.265 "data_size": 7936 00:34:34.265 }, 00:34:34.265 { 00:34:34.265 "name": "BaseBdev2", 00:34:34.265 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:34.265 "is_configured": true, 00:34:34.265 "data_offset": 256, 00:34:34.265 "data_size": 7936 00:34:34.265 } 00:34:34.265 ] 00:34:34.265 }' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.265 [2024-10-28 13:45:48.325807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.265 
13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.265 "name": "raid_bdev1", 00:34:34.265 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:34.265 "strip_size_kb": 0, 00:34:34.265 "state": "online", 00:34:34.265 "raid_level": "raid1", 00:34:34.265 "superblock": true, 00:34:34.265 "num_base_bdevs": 2, 00:34:34.265 "num_base_bdevs_discovered": 1, 00:34:34.265 "num_base_bdevs_operational": 1, 00:34:34.265 "base_bdevs_list": [ 00:34:34.265 { 00:34:34.265 "name": null, 00:34:34.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.265 "is_configured": false, 00:34:34.265 "data_offset": 0, 00:34:34.265 "data_size": 7936 00:34:34.265 }, 00:34:34.265 { 00:34:34.265 "name": "BaseBdev2", 00:34:34.265 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:34.265 "is_configured": true, 00:34:34.265 "data_offset": 256, 00:34:34.265 "data_size": 7936 00:34:34.265 } 00:34:34.265 ] 00:34:34.265 }' 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.265 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.833 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:34.833 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.833 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:34.833 [2024-10-28 13:45:48.854159] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:34.833 [2024-10-28 13:45:48.854719] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:34.833 [2024-10-28 13:45:48.854758] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:34.833 [2024-10-28 13:45:48.854856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:34.833 [2024-10-28 13:45:48.858766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:34:34.833 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.833 13:45:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:34.833 [2024-10-28 13:45:48.861697] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.769 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:35.769 "name": "raid_bdev1", 00:34:35.769 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:35.769 "strip_size_kb": 0, 00:34:35.769 "state": "online", 00:34:35.769 "raid_level": "raid1", 00:34:35.769 "superblock": true, 00:34:35.769 "num_base_bdevs": 2, 00:34:35.769 "num_base_bdevs_discovered": 2, 00:34:35.769 "num_base_bdevs_operational": 2, 00:34:35.769 "process": { 00:34:35.769 "type": "rebuild", 00:34:35.769 "target": "spare", 00:34:35.769 "progress": { 00:34:35.769 "blocks": 2560, 00:34:35.769 "percent": 32 00:34:35.769 } 00:34:35.769 }, 00:34:35.769 "base_bdevs_list": [ 00:34:35.769 { 00:34:35.769 "name": "spare", 00:34:35.769 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:35.769 "is_configured": true, 00:34:35.769 "data_offset": 256, 00:34:35.769 "data_size": 7936 00:34:35.769 }, 00:34:35.769 { 00:34:35.769 "name": "BaseBdev2", 00:34:35.769 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:35.769 "is_configured": true, 00:34:35.769 "data_offset": 256, 00:34:35.769 "data_size": 7936 00:34:35.769 } 00:34:35.769 ] 00:34:35.769 }' 00:34:36.029 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:36.029 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:36.029 13:45:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.029 [2024-10-28 13:45:50.040966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:36.029 [2024-10-28 13:45:50.070891] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:36.029 [2024-10-28 13:45:50.071192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:36.029 [2024-10-28 13:45:50.071223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:36.029 [2024-10-28 13:45:50.071241] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:36.029 "name": "raid_bdev1", 00:34:36.029 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:36.029 "strip_size_kb": 0, 00:34:36.029 "state": "online", 00:34:36.029 "raid_level": "raid1", 00:34:36.029 "superblock": true, 00:34:36.029 "num_base_bdevs": 2, 00:34:36.029 "num_base_bdevs_discovered": 1, 00:34:36.029 "num_base_bdevs_operational": 1, 00:34:36.029 "base_bdevs_list": [ 00:34:36.029 { 00:34:36.029 "name": null, 00:34:36.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.029 "is_configured": false, 00:34:36.029 "data_offset": 0, 00:34:36.029 "data_size": 7936 00:34:36.029 }, 00:34:36.029 { 00:34:36.029 "name": "BaseBdev2", 00:34:36.029 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:36.029 "is_configured": true, 00:34:36.029 "data_offset": 256, 00:34:36.029 "data_size": 7936 00:34:36.029 } 00:34:36.029 ] 00:34:36.029 }' 00:34:36.029 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:36.029 13:45:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:36.598 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.598 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 [2024-10-28 13:45:50.635657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:36.598 [2024-10-28 13:45:50.635742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.598 [2024-10-28 13:45:50.635789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:36.598 [2024-10-28 13:45:50.635821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:36.598 [2024-10-28 13:45:50.636167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.598 [2024-10-28 13:45:50.636201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:36.598 [2024-10-28 13:45:50.636301] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:36.598 [2024-10-28 13:45:50.636330] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:36.598 [2024-10-28 13:45:50.636345] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:36.598 [2024-10-28 13:45:50.636378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:36.598 [2024-10-28 13:45:50.640111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:34:36.598 spare 00:34:36.598 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.598 13:45:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:36.598 [2024-10-28 13:45:50.642888] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.534 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:37.793 "name": 
"raid_bdev1", 00:34:37.793 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:37.793 "strip_size_kb": 0, 00:34:37.793 "state": "online", 00:34:37.793 "raid_level": "raid1", 00:34:37.793 "superblock": true, 00:34:37.793 "num_base_bdevs": 2, 00:34:37.793 "num_base_bdevs_discovered": 2, 00:34:37.793 "num_base_bdevs_operational": 2, 00:34:37.793 "process": { 00:34:37.793 "type": "rebuild", 00:34:37.793 "target": "spare", 00:34:37.793 "progress": { 00:34:37.793 "blocks": 2560, 00:34:37.793 "percent": 32 00:34:37.793 } 00:34:37.793 }, 00:34:37.793 "base_bdevs_list": [ 00:34:37.793 { 00:34:37.793 "name": "spare", 00:34:37.793 "uuid": "04bf4d3a-47b2-5692-8073-e72f035db6cd", 00:34:37.793 "is_configured": true, 00:34:37.793 "data_offset": 256, 00:34:37.793 "data_size": 7936 00:34:37.793 }, 00:34:37.793 { 00:34:37.793 "name": "BaseBdev2", 00:34:37.793 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:37.793 "is_configured": true, 00:34:37.793 "data_offset": 256, 00:34:37.793 "data_size": 7936 00:34:37.793 } 00:34:37.793 ] 00:34:37.793 }' 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:37.793 [2024-10-28 13:45:51.810184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:34:37.793 [2024-10-28 13:45:51.852194] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:37.793 [2024-10-28 13:45:51.852553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.793 [2024-10-28 13:45:51.852598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:37.793 [2024-10-28 13:45:51.852614] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:37.793 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:37.794 "name": "raid_bdev1", 00:34:37.794 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:37.794 "strip_size_kb": 0, 00:34:37.794 "state": "online", 00:34:37.794 "raid_level": "raid1", 00:34:37.794 "superblock": true, 00:34:37.794 "num_base_bdevs": 2, 00:34:37.794 "num_base_bdevs_discovered": 1, 00:34:37.794 "num_base_bdevs_operational": 1, 00:34:37.794 "base_bdevs_list": [ 00:34:37.794 { 00:34:37.794 "name": null, 00:34:37.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:37.794 "is_configured": false, 00:34:37.794 "data_offset": 0, 00:34:37.794 "data_size": 7936 00:34:37.794 }, 00:34:37.794 { 00:34:37.794 "name": "BaseBdev2", 00:34:37.794 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:37.794 "is_configured": true, 00:34:37.794 "data_offset": 256, 00:34:37.794 "data_size": 7936 00:34:37.794 } 00:34:37.794 ] 00:34:37.794 }' 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:37.794 13:45:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:38.362 13:45:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:38.362 "name": "raid_bdev1", 00:34:38.362 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:38.362 "strip_size_kb": 0, 00:34:38.362 "state": "online", 00:34:38.362 "raid_level": "raid1", 00:34:38.362 "superblock": true, 00:34:38.362 "num_base_bdevs": 2, 00:34:38.362 "num_base_bdevs_discovered": 1, 00:34:38.362 "num_base_bdevs_operational": 1, 00:34:38.362 "base_bdevs_list": [ 00:34:38.362 { 00:34:38.362 "name": null, 00:34:38.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.362 "is_configured": false, 00:34:38.362 "data_offset": 0, 00:34:38.362 "data_size": 7936 00:34:38.362 }, 00:34:38.362 { 00:34:38.362 "name": "BaseBdev2", 00:34:38.362 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:38.362 "is_configured": true, 00:34:38.362 "data_offset": 256, 00:34:38.362 "data_size": 7936 00:34:38.362 } 00:34:38.362 ] 00:34:38.362 }' 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:38.362 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:38.621 [2024-10-28 13:45:52.573755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:38.621 [2024-10-28 13:45:52.573876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.621 [2024-10-28 13:45:52.573908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:38.621 [2024-10-28 13:45:52.573922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.621 [2024-10-28 13:45:52.574181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.621 [2024-10-28 13:45:52.574218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:34:38.621 [2024-10-28 13:45:52.574291] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:38.621 [2024-10-28 13:45:52.574310] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:38.621 [2024-10-28 13:45:52.574323] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:38.621 [2024-10-28 13:45:52.574336] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:38.621 BaseBdev1 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.621 13:45:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.605 "name": "raid_bdev1", 00:34:39.605 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:39.605 "strip_size_kb": 0, 00:34:39.605 "state": "online", 00:34:39.605 "raid_level": "raid1", 00:34:39.605 "superblock": true, 00:34:39.605 "num_base_bdevs": 2, 00:34:39.605 "num_base_bdevs_discovered": 1, 00:34:39.605 "num_base_bdevs_operational": 1, 00:34:39.605 "base_bdevs_list": [ 00:34:39.605 { 00:34:39.605 "name": null, 00:34:39.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.605 "is_configured": false, 00:34:39.605 "data_offset": 0, 00:34:39.605 "data_size": 7936 00:34:39.605 }, 00:34:39.605 { 00:34:39.605 "name": "BaseBdev2", 00:34:39.605 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:39.605 "is_configured": true, 00:34:39.605 "data_offset": 256, 00:34:39.605 "data_size": 7936 00:34:39.605 } 00:34:39.605 ] 00:34:39.605 }' 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.605 13:45:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:40.172 "name": "raid_bdev1", 00:34:40.172 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:40.172 "strip_size_kb": 0, 00:34:40.172 "state": "online", 00:34:40.172 "raid_level": "raid1", 00:34:40.172 "superblock": true, 00:34:40.172 "num_base_bdevs": 2, 00:34:40.172 "num_base_bdevs_discovered": 1, 00:34:40.172 "num_base_bdevs_operational": 1, 00:34:40.172 "base_bdevs_list": [ 00:34:40.172 { 00:34:40.172 "name": null, 00:34:40.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.172 "is_configured": false, 00:34:40.172 "data_offset": 0, 00:34:40.172 "data_size": 7936 00:34:40.172 }, 00:34:40.172 { 00:34:40.172 "name": "BaseBdev2", 00:34:40.172 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:40.172 "is_configured": 
true, 00:34:40.172 "data_offset": 256, 00:34:40.172 "data_size": 7936 00:34:40.172 } 00:34:40.172 ] 00:34:40.172 }' 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.172 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:40.172 [2024-10-28 13:45:54.290359] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:40.172 [2024-10-28 13:45:54.290805] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:40.172 [2024-10-28 13:45:54.291015] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:40.172 request: 00:34:40.172 { 00:34:40.173 "base_bdev": "BaseBdev1", 00:34:40.173 "raid_bdev": "raid_bdev1", 00:34:40.173 "method": "bdev_raid_add_base_bdev", 00:34:40.173 "req_id": 1 00:34:40.173 } 00:34:40.173 Got JSON-RPC error response 00:34:40.173 response: 00:34:40.173 { 00:34:40.173 "code": -22, 00:34:40.173 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:40.173 } 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:40.173 13:45:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.547 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:41.547 "name": "raid_bdev1", 00:34:41.547 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:41.547 "strip_size_kb": 0, 00:34:41.547 "state": "online", 00:34:41.547 "raid_level": "raid1", 00:34:41.547 "superblock": true, 00:34:41.547 "num_base_bdevs": 2, 00:34:41.547 "num_base_bdevs_discovered": 1, 00:34:41.547 "num_base_bdevs_operational": 1, 00:34:41.547 "base_bdevs_list": [ 00:34:41.547 { 00:34:41.547 "name": null, 00:34:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.547 "is_configured": false, 00:34:41.547 
"data_offset": 0, 00:34:41.547 "data_size": 7936 00:34:41.547 }, 00:34:41.547 { 00:34:41.547 "name": "BaseBdev2", 00:34:41.547 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:41.547 "is_configured": true, 00:34:41.547 "data_offset": 256, 00:34:41.547 "data_size": 7936 00:34:41.547 } 00:34:41.547 ] 00:34:41.547 }' 00:34:41.548 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:41.548 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:41.830 "name": "raid_bdev1", 00:34:41.830 "uuid": "7eb2d78e-b6d3-4d02-a811-4f237917a058", 00:34:41.830 
"strip_size_kb": 0, 00:34:41.830 "state": "online", 00:34:41.830 "raid_level": "raid1", 00:34:41.830 "superblock": true, 00:34:41.830 "num_base_bdevs": 2, 00:34:41.830 "num_base_bdevs_discovered": 1, 00:34:41.830 "num_base_bdevs_operational": 1, 00:34:41.830 "base_bdevs_list": [ 00:34:41.830 { 00:34:41.830 "name": null, 00:34:41.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.830 "is_configured": false, 00:34:41.830 "data_offset": 0, 00:34:41.830 "data_size": 7936 00:34:41.830 }, 00:34:41.830 { 00:34:41.830 "name": "BaseBdev2", 00:34:41.830 "uuid": "c16b7458-8a7e-5e44-9368-a630405dcf44", 00:34:41.830 "is_configured": true, 00:34:41.830 "data_offset": 256, 00:34:41.830 "data_size": 7936 00:34:41.830 } 00:34:41.830 ] 00:34:41.830 }' 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:41.830 13:45:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 100451 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 100451 ']' 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 100451 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.088 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100451 00:34:42.088 killing process 
with pid 100451 00:34:42.088 Received shutdown signal, test time was about 60.000000 seconds 00:34:42.088 00:34:42.088 Latency(us) 00:34:42.088 [2024-10-28T13:45:56.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.088 [2024-10-28T13:45:56.248Z] =================================================================================================================== 00:34:42.088 [2024-10-28T13:45:56.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:42.089 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:42.089 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:42.089 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100451' 00:34:42.089 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 100451 00:34:42.089 [2024-10-28 13:45:56.047240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:42.089 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 100451 00:34:42.089 [2024-10-28 13:45:56.047393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:42.089 [2024-10-28 13:45:56.047482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:42.089 [2024-10-28 13:45:56.047501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:42.089 [2024-10-28 13:45:56.079965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:42.347 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:34:42.347 00:34:42.347 real 0m20.603s 00:34:42.347 user 0m28.292s 00:34:42.347 sys 0m2.758s 00:34:42.347 13:45:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:42.347 ************************************ 00:34:42.347 13:45:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:34:42.347 END TEST raid_rebuild_test_sb_md_separate 00:34:42.347 ************************************ 00:34:42.347 13:45:56 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:34:42.347 13:45:56 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:34:42.347 13:45:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:34:42.347 13:45:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:42.347 13:45:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:42.347 ************************************ 00:34:42.347 START TEST raid_state_function_test_sb_md_interleaved 00:34:42.347 ************************************ 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.347 13:45:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=101142 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 101142' 00:34:42.347 Process raid pid: 101142 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 101142 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 101142 ']' 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:42.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.347 13:45:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:42.347 [2024-10-28 13:45:56.498161] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:34:42.347 [2024-10-28 13:45:56.498388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:42.606 [2024-10-28 13:45:56.660473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:42.606 [2024-10-28 13:45:56.688537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:42.606 [2024-10-28 13:45:56.733675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:42.865 [2024-10-28 13:45:56.793905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:42.865 [2024-10-28 13:45:56.793936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.433 [2024-10-28 13:45:57.473495] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:43.433 [2024-10-28 13:45:57.473589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:43.433 [2024-10-28 13:45:57.473616] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:34:43.433 [2024-10-28 13:45:57.473630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.433 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:43.434 "name": "Existed_Raid", 00:34:43.434 "uuid": "3f2f8864-3a6c-4ed8-9132-bf86483be862", 00:34:43.434 "strip_size_kb": 0, 00:34:43.434 "state": "configuring", 00:34:43.434 "raid_level": "raid1", 00:34:43.434 "superblock": true, 00:34:43.434 "num_base_bdevs": 2, 00:34:43.434 "num_base_bdevs_discovered": 0, 00:34:43.434 "num_base_bdevs_operational": 2, 00:34:43.434 "base_bdevs_list": [ 00:34:43.434 { 00:34:43.434 "name": "BaseBdev1", 00:34:43.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.434 "is_configured": false, 00:34:43.434 "data_offset": 0, 00:34:43.434 "data_size": 0 00:34:43.434 }, 00:34:43.434 { 00:34:43.434 "name": "BaseBdev2", 00:34:43.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.434 "is_configured": false, 00:34:43.434 "data_offset": 0, 00:34:43.434 "data_size": 0 00:34:43.434 } 00:34:43.434 ] 00:34:43.434 }' 00:34:43.434 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:43.434 13:45:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 [2024-10-28 13:45:58.017612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:34:43.999 [2024-10-28 13:45:58.017649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 [2024-10-28 13:45:58.029614] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:43.999 [2024-10-28 13:45:58.029838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:43.999 [2024-10-28 13:45:58.029987] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:43.999 [2024-10-28 13:45:58.030174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 [2024-10-28 13:45:58.053733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:43.999 BaseBdev1 00:34:43.999 13:45:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.999 [ 00:34:43.999 { 00:34:43.999 "name": "BaseBdev1", 00:34:43.999 "aliases": [ 00:34:43.999 "97f7d3da-3dcb-41e9-9830-7f79304df6fd" 00:34:43.999 ], 00:34:43.999 "product_name": "Malloc 
disk", 00:34:43.999 "block_size": 4128, 00:34:43.999 "num_blocks": 8192, 00:34:43.999 "uuid": "97f7d3da-3dcb-41e9-9830-7f79304df6fd", 00:34:43.999 "md_size": 32, 00:34:43.999 "md_interleave": true, 00:34:43.999 "dif_type": 0, 00:34:43.999 "assigned_rate_limits": { 00:34:43.999 "rw_ios_per_sec": 0, 00:34:43.999 "rw_mbytes_per_sec": 0, 00:34:43.999 "r_mbytes_per_sec": 0, 00:34:43.999 "w_mbytes_per_sec": 0 00:34:43.999 }, 00:34:43.999 "claimed": true, 00:34:43.999 "claim_type": "exclusive_write", 00:34:43.999 "zoned": false, 00:34:43.999 "supported_io_types": { 00:34:43.999 "read": true, 00:34:43.999 "write": true, 00:34:43.999 "unmap": true, 00:34:43.999 "flush": true, 00:34:43.999 "reset": true, 00:34:43.999 "nvme_admin": false, 00:34:43.999 "nvme_io": false, 00:34:43.999 "nvme_io_md": false, 00:34:43.999 "write_zeroes": true, 00:34:43.999 "zcopy": true, 00:34:43.999 "get_zone_info": false, 00:34:43.999 "zone_management": false, 00:34:43.999 "zone_append": false, 00:34:43.999 "compare": false, 00:34:43.999 "compare_and_write": false, 00:34:43.999 "abort": true, 00:34:43.999 "seek_hole": false, 00:34:43.999 "seek_data": false, 00:34:43.999 "copy": true, 00:34:43.999 "nvme_iov_md": false 00:34:43.999 }, 00:34:43.999 "memory_domains": [ 00:34:43.999 { 00:34:43.999 "dma_device_id": "system", 00:34:43.999 "dma_device_type": 1 00:34:43.999 }, 00:34:43.999 { 00:34:43.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.999 "dma_device_type": 2 00:34:43.999 } 00:34:43.999 ], 00:34:43.999 "driver_specific": {} 00:34:43.999 } 00:34:43.999 ] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:43.999 13:45:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.999 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.000 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.000 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.000 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.000 "name": "Existed_Raid", 00:34:44.000 "uuid": 
"3a808875-ff64-4b4a-ae8c-a55cd4bfd611", 00:34:44.000 "strip_size_kb": 0, 00:34:44.000 "state": "configuring", 00:34:44.000 "raid_level": "raid1", 00:34:44.000 "superblock": true, 00:34:44.000 "num_base_bdevs": 2, 00:34:44.000 "num_base_bdevs_discovered": 1, 00:34:44.000 "num_base_bdevs_operational": 2, 00:34:44.000 "base_bdevs_list": [ 00:34:44.000 { 00:34:44.000 "name": "BaseBdev1", 00:34:44.000 "uuid": "97f7d3da-3dcb-41e9-9830-7f79304df6fd", 00:34:44.000 "is_configured": true, 00:34:44.000 "data_offset": 256, 00:34:44.000 "data_size": 7936 00:34:44.000 }, 00:34:44.000 { 00:34:44.000 "name": "BaseBdev2", 00:34:44.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.000 "is_configured": false, 00:34:44.000 "data_offset": 0, 00:34:44.000 "data_size": 0 00:34:44.000 } 00:34:44.000 ] 00:34:44.000 }' 00:34:44.000 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.000 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.566 [2024-10-28 13:45:58.617917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:44.566 [2024-10-28 13:45:58.617980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.566 [2024-10-28 13:45:58.625994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:44.566 [2024-10-28 13:45:58.628849] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:44.566 [2024-10-28 13:45:58.629070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:44.566 "name": "Existed_Raid", 00:34:44.566 "uuid": "af3900f3-0941-44e1-89cb-110f6ffaa8bb", 00:34:44.566 "strip_size_kb": 0, 00:34:44.566 "state": "configuring", 00:34:44.566 "raid_level": "raid1", 00:34:44.566 "superblock": true, 00:34:44.566 "num_base_bdevs": 2, 00:34:44.566 "num_base_bdevs_discovered": 1, 00:34:44.566 "num_base_bdevs_operational": 2, 00:34:44.566 "base_bdevs_list": [ 00:34:44.566 { 00:34:44.566 "name": "BaseBdev1", 00:34:44.566 "uuid": "97f7d3da-3dcb-41e9-9830-7f79304df6fd", 00:34:44.566 "is_configured": true, 00:34:44.566 "data_offset": 256, 00:34:44.566 "data_size": 7936 00:34:44.566 }, 00:34:44.566 { 00:34:44.566 "name": "BaseBdev2", 00:34:44.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.566 "is_configured": false, 00:34:44.566 "data_offset": 0, 00:34:44.566 
"data_size": 0 00:34:44.566 } 00:34:44.566 ] 00:34:44.566 }' 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:44.566 13:45:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.139 [2024-10-28 13:45:59.164937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:45.139 BaseBdev2 00:34:45.139 [2024-10-28 13:45:59.165425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:45.139 [2024-10-28 13:45:59.165477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:45.139 [2024-10-28 13:45:59.165603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:45.139 [2024-10-28 13:45:59.165714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:45.139 [2024-10-28 13:45:59.165730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:34:45.139 [2024-10-28 13:45:59.165871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- 
# local bdev_name=BaseBdev2 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.139 [ 00:34:45.139 { 00:34:45.139 "name": "BaseBdev2", 00:34:45.139 "aliases": [ 00:34:45.139 "38ad676f-7e26-4fb9-8981-a93b590df801" 00:34:45.139 ], 00:34:45.139 "product_name": "Malloc disk", 00:34:45.139 "block_size": 4128, 00:34:45.139 "num_blocks": 8192, 00:34:45.139 "uuid": "38ad676f-7e26-4fb9-8981-a93b590df801", 00:34:45.139 "md_size": 32, 00:34:45.139 "md_interleave": true, 00:34:45.139 "dif_type": 0, 00:34:45.139 "assigned_rate_limits": { 00:34:45.139 "rw_ios_per_sec": 0, 00:34:45.139 "rw_mbytes_per_sec": 0, 
00:34:45.139 "r_mbytes_per_sec": 0, 00:34:45.139 "w_mbytes_per_sec": 0 00:34:45.139 }, 00:34:45.139 "claimed": true, 00:34:45.139 "claim_type": "exclusive_write", 00:34:45.139 "zoned": false, 00:34:45.139 "supported_io_types": { 00:34:45.139 "read": true, 00:34:45.139 "write": true, 00:34:45.139 "unmap": true, 00:34:45.139 "flush": true, 00:34:45.139 "reset": true, 00:34:45.139 "nvme_admin": false, 00:34:45.139 "nvme_io": false, 00:34:45.139 "nvme_io_md": false, 00:34:45.139 "write_zeroes": true, 00:34:45.139 "zcopy": true, 00:34:45.139 "get_zone_info": false, 00:34:45.139 "zone_management": false, 00:34:45.139 "zone_append": false, 00:34:45.139 "compare": false, 00:34:45.139 "compare_and_write": false, 00:34:45.139 "abort": true, 00:34:45.139 "seek_hole": false, 00:34:45.139 "seek_data": false, 00:34:45.139 "copy": true, 00:34:45.139 "nvme_iov_md": false 00:34:45.139 }, 00:34:45.139 "memory_domains": [ 00:34:45.139 { 00:34:45.139 "dma_device_id": "system", 00:34:45.139 "dma_device_type": 1 00:34:45.139 }, 00:34:45.139 { 00:34:45.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.139 "dma_device_type": 2 00:34:45.139 } 00:34:45.139 ], 00:34:45.139 "driver_specific": {} 00:34:45.139 } 00:34:45.139 ] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.139 "name": "Existed_Raid", 00:34:45.139 "uuid": "af3900f3-0941-44e1-89cb-110f6ffaa8bb", 00:34:45.139 "strip_size_kb": 0, 00:34:45.139 "state": 
"online", 00:34:45.139 "raid_level": "raid1", 00:34:45.139 "superblock": true, 00:34:45.139 "num_base_bdevs": 2, 00:34:45.139 "num_base_bdevs_discovered": 2, 00:34:45.139 "num_base_bdevs_operational": 2, 00:34:45.139 "base_bdevs_list": [ 00:34:45.139 { 00:34:45.139 "name": "BaseBdev1", 00:34:45.139 "uuid": "97f7d3da-3dcb-41e9-9830-7f79304df6fd", 00:34:45.139 "is_configured": true, 00:34:45.139 "data_offset": 256, 00:34:45.139 "data_size": 7936 00:34:45.139 }, 00:34:45.139 { 00:34:45.139 "name": "BaseBdev2", 00:34:45.139 "uuid": "38ad676f-7e26-4fb9-8981-a93b590df801", 00:34:45.139 "is_configured": true, 00:34:45.139 "data_offset": 256, 00:34:45.139 "data_size": 7936 00:34:45.139 } 00:34:45.139 ] 00:34:45.139 }' 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.139 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.707 [2024-10-28 13:45:59.713618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.707 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:45.707 "name": "Existed_Raid", 00:34:45.707 "aliases": [ 00:34:45.707 "af3900f3-0941-44e1-89cb-110f6ffaa8bb" 00:34:45.707 ], 00:34:45.707 "product_name": "Raid Volume", 00:34:45.707 "block_size": 4128, 00:34:45.707 "num_blocks": 7936, 00:34:45.707 "uuid": "af3900f3-0941-44e1-89cb-110f6ffaa8bb", 00:34:45.707 "md_size": 32, 00:34:45.707 "md_interleave": true, 00:34:45.707 "dif_type": 0, 00:34:45.707 "assigned_rate_limits": { 00:34:45.707 "rw_ios_per_sec": 0, 00:34:45.707 "rw_mbytes_per_sec": 0, 00:34:45.707 "r_mbytes_per_sec": 0, 00:34:45.707 "w_mbytes_per_sec": 0 00:34:45.707 }, 00:34:45.707 "claimed": false, 00:34:45.707 "zoned": false, 00:34:45.707 "supported_io_types": { 00:34:45.707 "read": true, 00:34:45.707 "write": true, 00:34:45.707 "unmap": false, 00:34:45.708 "flush": false, 00:34:45.708 "reset": true, 00:34:45.708 "nvme_admin": false, 00:34:45.708 "nvme_io": false, 00:34:45.708 "nvme_io_md": false, 00:34:45.708 "write_zeroes": true, 00:34:45.708 "zcopy": false, 00:34:45.708 "get_zone_info": false, 00:34:45.708 "zone_management": false, 00:34:45.708 "zone_append": false, 00:34:45.708 "compare": false, 00:34:45.708 "compare_and_write": false, 00:34:45.708 "abort": false, 00:34:45.708 "seek_hole": false, 00:34:45.708 "seek_data": false, 00:34:45.708 "copy": false, 00:34:45.708 "nvme_iov_md": false 00:34:45.708 
}, 00:34:45.708 "memory_domains": [ 00:34:45.708 { 00:34:45.708 "dma_device_id": "system", 00:34:45.708 "dma_device_type": 1 00:34:45.708 }, 00:34:45.708 { 00:34:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.708 "dma_device_type": 2 00:34:45.708 }, 00:34:45.708 { 00:34:45.708 "dma_device_id": "system", 00:34:45.708 "dma_device_type": 1 00:34:45.708 }, 00:34:45.708 { 00:34:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:45.708 "dma_device_type": 2 00:34:45.708 } 00:34:45.708 ], 00:34:45.708 "driver_specific": { 00:34:45.708 "raid": { 00:34:45.708 "uuid": "af3900f3-0941-44e1-89cb-110f6ffaa8bb", 00:34:45.708 "strip_size_kb": 0, 00:34:45.708 "state": "online", 00:34:45.708 "raid_level": "raid1", 00:34:45.708 "superblock": true, 00:34:45.708 "num_base_bdevs": 2, 00:34:45.708 "num_base_bdevs_discovered": 2, 00:34:45.708 "num_base_bdevs_operational": 2, 00:34:45.708 "base_bdevs_list": [ 00:34:45.708 { 00:34:45.708 "name": "BaseBdev1", 00:34:45.708 "uuid": "97f7d3da-3dcb-41e9-9830-7f79304df6fd", 00:34:45.708 "is_configured": true, 00:34:45.708 "data_offset": 256, 00:34:45.708 "data_size": 7936 00:34:45.708 }, 00:34:45.708 { 00:34:45.708 "name": "BaseBdev2", 00:34:45.708 "uuid": "38ad676f-7e26-4fb9-8981-a93b590df801", 00:34:45.708 "is_configured": true, 00:34:45.708 "data_offset": 256, 00:34:45.708 "data_size": 7936 00:34:45.708 } 00:34:45.708 ] 00:34:45.708 } 00:34:45.708 } 00:34:45.708 }' 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:45.708 BaseBdev2' 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:45.708 13:45:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.708 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:45.967 13:45:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 [2024-10-28 13:45:59.973332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:45.967 13:45:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.967 13:45:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:45.967 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.967 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.967 "name": "Existed_Raid", 00:34:45.967 "uuid": "af3900f3-0941-44e1-89cb-110f6ffaa8bb", 00:34:45.967 "strip_size_kb": 0, 00:34:45.967 "state": "online", 00:34:45.967 "raid_level": "raid1", 
00:34:45.967 "superblock": true, 00:34:45.967 "num_base_bdevs": 2, 00:34:45.967 "num_base_bdevs_discovered": 1, 00:34:45.967 "num_base_bdevs_operational": 1, 00:34:45.967 "base_bdevs_list": [ 00:34:45.967 { 00:34:45.967 "name": null, 00:34:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:45.967 "is_configured": false, 00:34:45.967 "data_offset": 0, 00:34:45.967 "data_size": 7936 00:34:45.967 }, 00:34:45.967 { 00:34:45.967 "name": "BaseBdev2", 00:34:45.967 "uuid": "38ad676f-7e26-4fb9-8981-a93b590df801", 00:34:45.967 "is_configured": true, 00:34:45.967 "data_offset": 256, 00:34:45.967 "data_size": 7936 00:34:45.967 } 00:34:45.967 ] 00:34:45.967 }' 00:34:45.967 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.967 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:46.535 [2024-10-28 13:46:00.580868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:46.535 [2024-10-28 13:46:00.581002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:46.535 [2024-10-28 13:46:00.593635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:46.535 [2024-10-28 13:46:00.593911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:46.535 [2024-10-28 13:46:00.593939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 101142 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 101142 ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 101142 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101142 00:34:46.535 killing process with pid 101142 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101142' 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 101142 00:34:46.535 [2024-10-28 13:46:00.688111] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:34:46.535 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 101142 00:34:46.535 [2024-10-28 13:46:00.689485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:46.793 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:34:46.793 00:34:46.793 real 0m4.558s 00:34:46.793 user 0m7.418s 00:34:46.793 sys 0m0.814s 00:34:46.793 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:46.793 ************************************ 00:34:46.793 END TEST raid_state_function_test_sb_md_interleaved 00:34:46.793 ************************************ 00:34:46.793 13:46:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.051 13:46:00 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:34:47.051 13:46:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:47.051 13:46:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.051 13:46:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:47.051 ************************************ 00:34:47.051 START TEST raid_superblock_test_md_interleaved 00:34:47.051 ************************************ 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:47.051 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:34:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=101389 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 101389 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 101389 ']' 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.052 13:46:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:47.052 [2024-10-28 13:46:01.105937] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:34:47.052 [2024-10-28 13:46:01.106496] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101389 ] 00:34:47.310 [2024-10-28 13:46:01.263870] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:47.310 [2024-10-28 13:46:01.291941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:47.310 [2024-10-28 13:46:01.333957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.310 [2024-10-28 13:46:01.391325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:47.310 [2024-10-28 13:46:01.391372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.245 13:46:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.245 malloc1 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.245 [2024-10-28 13:46:02.112948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:48.245 [2024-10-28 13:46:02.113212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.245 [2024-10-28 13:46:02.113262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:48.245 [2024-10-28 13:46:02.113279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.245 [2024-10-28 13:46:02.116028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.245 [2024-10-28 13:46:02.116076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:48.245 pt1 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:34:48.245 
13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.245 malloc2 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.245 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.245 [2024-10-28 13:46:02.145641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:48.245 [2024-10-28 13:46:02.145726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.245 [2024-10-28 13:46:02.145754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:48.245 [2024-10-28 13:46:02.145768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.245 [2024-10-28 13:46:02.148297] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.245 [2024-10-28 13:46:02.148338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:48.245 pt2 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.246 [2024-10-28 13:46:02.157667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:48.246 [2024-10-28 13:46:02.160132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:48.246 [2024-10-28 13:46:02.160579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:34:48.246 [2024-10-28 13:46:02.160605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:48.246 [2024-10-28 13:46:02.160716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:48.246 [2024-10-28 13:46:02.160842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:34:48.246 [2024-10-28 13:46:02.160860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:34:48.246 [2024-10-28 13:46:02.160964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:48.246 
13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.246 
13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:48.246 "name": "raid_bdev1", 00:34:48.246 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:48.246 "strip_size_kb": 0, 00:34:48.246 "state": "online", 00:34:48.246 "raid_level": "raid1", 00:34:48.246 "superblock": true, 00:34:48.246 "num_base_bdevs": 2, 00:34:48.246 "num_base_bdevs_discovered": 2, 00:34:48.246 "num_base_bdevs_operational": 2, 00:34:48.246 "base_bdevs_list": [ 00:34:48.246 { 00:34:48.246 "name": "pt1", 00:34:48.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:48.246 "is_configured": true, 00:34:48.246 "data_offset": 256, 00:34:48.246 "data_size": 7936 00:34:48.246 }, 00:34:48.246 { 00:34:48.246 "name": "pt2", 00:34:48.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:48.246 "is_configured": true, 00:34:48.246 "data_offset": 256, 00:34:48.246 "data_size": 7936 00:34:48.246 } 00:34:48.246 ] 00:34:48.246 }' 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:48.246 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:48.813 13:46:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:48.813 [2024-10-28 13:46:02.738360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.813 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:48.813 "name": "raid_bdev1", 00:34:48.813 "aliases": [ 00:34:48.813 "4da5dcf0-17aa-4265-947f-567201948e8f" 00:34:48.813 ], 00:34:48.813 "product_name": "Raid Volume", 00:34:48.813 "block_size": 4128, 00:34:48.813 "num_blocks": 7936, 00:34:48.813 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:48.813 "md_size": 32, 00:34:48.813 "md_interleave": true, 00:34:48.813 "dif_type": 0, 00:34:48.813 "assigned_rate_limits": { 00:34:48.813 "rw_ios_per_sec": 0, 00:34:48.813 "rw_mbytes_per_sec": 0, 00:34:48.813 "r_mbytes_per_sec": 0, 00:34:48.813 "w_mbytes_per_sec": 0 00:34:48.813 }, 00:34:48.813 "claimed": false, 00:34:48.813 "zoned": false, 00:34:48.813 "supported_io_types": { 00:34:48.813 "read": true, 00:34:48.813 "write": true, 00:34:48.813 "unmap": false, 00:34:48.814 "flush": false, 00:34:48.814 "reset": true, 00:34:48.814 "nvme_admin": false, 00:34:48.814 "nvme_io": false, 00:34:48.814 "nvme_io_md": false, 00:34:48.814 "write_zeroes": true, 00:34:48.814 "zcopy": false, 00:34:48.814 "get_zone_info": false, 00:34:48.814 "zone_management": false, 00:34:48.814 "zone_append": false, 00:34:48.814 "compare": false, 00:34:48.814 "compare_and_write": false, 00:34:48.814 
"abort": false, 00:34:48.814 "seek_hole": false, 00:34:48.814 "seek_data": false, 00:34:48.814 "copy": false, 00:34:48.814 "nvme_iov_md": false 00:34:48.814 }, 00:34:48.814 "memory_domains": [ 00:34:48.814 { 00:34:48.814 "dma_device_id": "system", 00:34:48.814 "dma_device_type": 1 00:34:48.814 }, 00:34:48.814 { 00:34:48.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.814 "dma_device_type": 2 00:34:48.814 }, 00:34:48.814 { 00:34:48.814 "dma_device_id": "system", 00:34:48.814 "dma_device_type": 1 00:34:48.814 }, 00:34:48.814 { 00:34:48.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.814 "dma_device_type": 2 00:34:48.814 } 00:34:48.814 ], 00:34:48.814 "driver_specific": { 00:34:48.814 "raid": { 00:34:48.814 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:48.814 "strip_size_kb": 0, 00:34:48.814 "state": "online", 00:34:48.814 "raid_level": "raid1", 00:34:48.814 "superblock": true, 00:34:48.814 "num_base_bdevs": 2, 00:34:48.814 "num_base_bdevs_discovered": 2, 00:34:48.814 "num_base_bdevs_operational": 2, 00:34:48.814 "base_bdevs_list": [ 00:34:48.814 { 00:34:48.814 "name": "pt1", 00:34:48.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:48.814 "is_configured": true, 00:34:48.814 "data_offset": 256, 00:34:48.814 "data_size": 7936 00:34:48.814 }, 00:34:48.814 { 00:34:48.814 "name": "pt2", 00:34:48.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:48.814 "is_configured": true, 00:34:48.814 "data_offset": 256, 00:34:48.814 "data_size": 7936 00:34:48.814 } 00:34:48.814 ] 00:34:48.814 } 00:34:48.814 } 00:34:48.814 }' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:48.814 pt2' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:48.814 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.072 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:49.072 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:49.072 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.072 13:46:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.072 [2024-10-28 13:46:03.006334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4da5dcf0-17aa-4265-947f-567201948e8f 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 4da5dcf0-17aa-4265-947f-567201948e8f ']' 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.072 [2024-10-28 13:46:03.057959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:49.072 [2024-10-28 13:46:03.057998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:49.072 [2024-10-28 
13:46:03.058111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:49.072 [2024-10-28 13:46:03.058259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:49.072 [2024-10-28 13:46:03.058285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.072 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.072 13:46:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:49.073 13:46:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.073 [2024-10-28 13:46:03.198040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:49.073 [2024-10-28 13:46:03.200801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:49.073 [2024-10-28 13:46:03.200883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:49.073 [2024-10-28 13:46:03.200968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:49.073 [2024-10-28 13:46:03.200994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:49.073 [2024-10-28 13:46:03.201009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:34:49.073 request: 00:34:49.073 { 00:34:49.073 "name": "raid_bdev1", 00:34:49.073 "raid_level": "raid1", 00:34:49.073 "base_bdevs": [ 00:34:49.073 "malloc1", 00:34:49.073 "malloc2" 00:34:49.073 ], 00:34:49.073 "superblock": false, 00:34:49.073 "method": "bdev_raid_create", 00:34:49.073 "req_id": 1 00:34:49.073 } 00:34:49.073 Got JSON-RPC error response 
00:34:49.073 response: 00:34:49.073 { 00:34:49.073 "code": -17, 00:34:49.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:49.073 } 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.073 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.331 
[2024-10-28 13:46:03.262024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:49.331 [2024-10-28 13:46:03.262101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.331 [2024-10-28 13:46:03.262129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:49.331 [2024-10-28 13:46:03.262165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.331 [2024-10-28 13:46:03.264969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.331 [2024-10-28 13:46:03.265224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:49.331 [2024-10-28 13:46:03.265301] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:49.331 [2024-10-28 13:46:03.265365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:49.331 pt1 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:49.331 "name": "raid_bdev1", 00:34:49.331 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:49.331 "strip_size_kb": 0, 00:34:49.331 "state": "configuring", 00:34:49.331 "raid_level": "raid1", 00:34:49.331 "superblock": true, 00:34:49.331 "num_base_bdevs": 2, 00:34:49.331 "num_base_bdevs_discovered": 1, 00:34:49.331 "num_base_bdevs_operational": 2, 00:34:49.331 "base_bdevs_list": [ 00:34:49.331 { 00:34:49.331 "name": "pt1", 00:34:49.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:49.331 "is_configured": true, 00:34:49.331 "data_offset": 256, 00:34:49.331 "data_size": 7936 00:34:49.331 }, 00:34:49.331 { 00:34:49.331 "name": null, 00:34:49.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:49.331 "is_configured": false, 00:34:49.331 "data_offset": 256, 00:34:49.331 "data_size": 7936 00:34:49.331 } 00:34:49.331 ] 00:34:49.331 }' 00:34:49.331 13:46:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:49.331 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.898 [2024-10-28 13:46:03.806349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:49.898 [2024-10-28 13:46:03.806457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.898 [2024-10-28 13:46:03.806490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:49.898 [2024-10-28 13:46:03.806508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.898 [2024-10-28 13:46:03.806711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.898 [2024-10-28 13:46:03.806739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:49.898 [2024-10-28 13:46:03.806835] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:49.898 [2024-10-28 13:46:03.806870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:49.898 [2024-10-28 13:46:03.806970] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:49.898 [2024-10-28 13:46:03.806989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:49.898 [2024-10-28 13:46:03.807072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:49.898 [2024-10-28 13:46:03.807173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:49.898 [2024-10-28 13:46:03.807244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:49.898 [2024-10-28 13:46:03.807341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.898 pt2 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:49.898 "name": "raid_bdev1", 00:34:49.898 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:49.898 "strip_size_kb": 0, 00:34:49.898 "state": "online", 00:34:49.898 "raid_level": "raid1", 00:34:49.898 "superblock": true, 00:34:49.898 "num_base_bdevs": 2, 00:34:49.898 "num_base_bdevs_discovered": 2, 00:34:49.898 "num_base_bdevs_operational": 2, 00:34:49.898 "base_bdevs_list": [ 00:34:49.898 { 00:34:49.898 "name": "pt1", 00:34:49.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:49.898 "is_configured": true, 00:34:49.898 "data_offset": 256, 00:34:49.898 "data_size": 7936 00:34:49.898 }, 00:34:49.898 { 00:34:49.898 "name": "pt2", 00:34:49.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:49.898 "is_configured": true, 00:34:49.898 "data_offset": 256, 00:34:49.898 "data_size": 7936 00:34:49.898 } 00:34:49.898 ] 00:34:49.898 }' 00:34:49.898 13:46:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:49.898 13:46:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.480 [2024-10-28 13:46:04.362955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.480 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:50.480 "name": "raid_bdev1", 00:34:50.480 "aliases": [ 00:34:50.480 "4da5dcf0-17aa-4265-947f-567201948e8f" 00:34:50.480 ], 00:34:50.480 "product_name": "Raid Volume", 00:34:50.480 "block_size": 4128, 00:34:50.480 
"num_blocks": 7936, 00:34:50.480 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:50.480 "md_size": 32, 00:34:50.480 "md_interleave": true, 00:34:50.480 "dif_type": 0, 00:34:50.480 "assigned_rate_limits": { 00:34:50.480 "rw_ios_per_sec": 0, 00:34:50.480 "rw_mbytes_per_sec": 0, 00:34:50.480 "r_mbytes_per_sec": 0, 00:34:50.480 "w_mbytes_per_sec": 0 00:34:50.480 }, 00:34:50.480 "claimed": false, 00:34:50.480 "zoned": false, 00:34:50.480 "supported_io_types": { 00:34:50.480 "read": true, 00:34:50.480 "write": true, 00:34:50.480 "unmap": false, 00:34:50.480 "flush": false, 00:34:50.480 "reset": true, 00:34:50.480 "nvme_admin": false, 00:34:50.480 "nvme_io": false, 00:34:50.480 "nvme_io_md": false, 00:34:50.480 "write_zeroes": true, 00:34:50.480 "zcopy": false, 00:34:50.480 "get_zone_info": false, 00:34:50.480 "zone_management": false, 00:34:50.480 "zone_append": false, 00:34:50.480 "compare": false, 00:34:50.480 "compare_and_write": false, 00:34:50.480 "abort": false, 00:34:50.480 "seek_hole": false, 00:34:50.480 "seek_data": false, 00:34:50.480 "copy": false, 00:34:50.480 "nvme_iov_md": false 00:34:50.480 }, 00:34:50.480 "memory_domains": [ 00:34:50.480 { 00:34:50.480 "dma_device_id": "system", 00:34:50.480 "dma_device_type": 1 00:34:50.480 }, 00:34:50.480 { 00:34:50.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.480 "dma_device_type": 2 00:34:50.480 }, 00:34:50.480 { 00:34:50.480 "dma_device_id": "system", 00:34:50.480 "dma_device_type": 1 00:34:50.480 }, 00:34:50.480 { 00:34:50.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.480 "dma_device_type": 2 00:34:50.480 } 00:34:50.480 ], 00:34:50.480 "driver_specific": { 00:34:50.480 "raid": { 00:34:50.480 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:50.481 "strip_size_kb": 0, 00:34:50.481 "state": "online", 00:34:50.481 "raid_level": "raid1", 00:34:50.481 "superblock": true, 00:34:50.481 "num_base_bdevs": 2, 00:34:50.481 "num_base_bdevs_discovered": 2, 00:34:50.481 "num_base_bdevs_operational": 
2, 00:34:50.481 "base_bdevs_list": [ 00:34:50.481 { 00:34:50.481 "name": "pt1", 00:34:50.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.481 "is_configured": true, 00:34:50.481 "data_offset": 256, 00:34:50.481 "data_size": 7936 00:34:50.481 }, 00:34:50.481 { 00:34:50.481 "name": "pt2", 00:34:50.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.481 "is_configured": true, 00:34:50.481 "data_offset": 256, 00:34:50.481 "data_size": 7936 00:34:50.481 } 00:34:50.481 ] 00:34:50.481 } 00:34:50.481 } 00:34:50.481 }' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:50.481 pt2' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.481 13:46:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:50.481 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:50.749 [2024-10-28 13:46:04.639048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 4da5dcf0-17aa-4265-947f-567201948e8f '!=' 4da5dcf0-17aa-4265-947f-567201948e8f ']' 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.749 [2024-10-28 13:46:04.690755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.749 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.749 "name": "raid_bdev1", 00:34:50.749 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:50.750 "strip_size_kb": 0, 00:34:50.750 "state": "online", 00:34:50.750 "raid_level": "raid1", 00:34:50.750 "superblock": true, 00:34:50.750 "num_base_bdevs": 2, 00:34:50.750 "num_base_bdevs_discovered": 1, 00:34:50.750 "num_base_bdevs_operational": 1, 00:34:50.750 "base_bdevs_list": [ 00:34:50.750 { 00:34:50.750 "name": null, 00:34:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.750 "is_configured": false, 00:34:50.750 "data_offset": 0, 00:34:50.750 "data_size": 7936 00:34:50.750 }, 00:34:50.750 { 00:34:50.750 "name": "pt2", 00:34:50.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.750 "is_configured": true, 00:34:50.750 "data_offset": 256, 00:34:50.750 "data_size": 7936 00:34:50.750 } 00:34:50.750 ] 00:34:50.750 
}' 00:34:50.750 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.750 13:46:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.317 [2024-10-28 13:46:05.231052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:51.317 [2024-10-28 13:46:05.231085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:51.317 [2024-10-28 13:46:05.231235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:51.317 [2024-10-28 13:46:05.231302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:51.317 [2024-10-28 13:46:05.231321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.317 
13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.317 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.317 [2024-10-28 13:46:05.307053] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:51.317 [2024-10-28 13:46:05.307140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.317 [2024-10-28 13:46:05.307186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:51.317 [2024-10-28 13:46:05.307205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.317 [2024-10-28 13:46:05.310057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.317 [2024-10-28 13:46:05.310133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:51.317 [2024-10-28 13:46:05.310223] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:51.317 [2024-10-28 13:46:05.310276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:51.317 [2024-10-28 13:46:05.310358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:51.317 [2024-10-28 13:46:05.310377] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:51.317 [2024-10-28 13:46:05.310506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:34:51.317 [2024-10-28 13:46:05.310596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:51.317 [2024-10-28 13:46:05.310609] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:51.317 [2024-10-28 13:46:05.310690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.317 pt2 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:51.318 13:46:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.318 "name": "raid_bdev1", 00:34:51.318 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:51.318 "strip_size_kb": 0, 00:34:51.318 "state": "online", 00:34:51.318 
"raid_level": "raid1", 00:34:51.318 "superblock": true, 00:34:51.318 "num_base_bdevs": 2, 00:34:51.318 "num_base_bdevs_discovered": 1, 00:34:51.318 "num_base_bdevs_operational": 1, 00:34:51.318 "base_bdevs_list": [ 00:34:51.318 { 00:34:51.318 "name": null, 00:34:51.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.318 "is_configured": false, 00:34:51.318 "data_offset": 256, 00:34:51.318 "data_size": 7936 00:34:51.318 }, 00:34:51.318 { 00:34:51.318 "name": "pt2", 00:34:51.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.318 "is_configured": true, 00:34:51.318 "data_offset": 256, 00:34:51.318 "data_size": 7936 00:34:51.318 } 00:34:51.318 ] 00:34:51.318 }' 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.318 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.883 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:51.883 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.883 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.883 [2024-10-28 13:46:05.855299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:51.883 [2024-10-28 13:46:05.855511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:51.884 [2024-10-28 13:46:05.855632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:51.884 [2024-10-28 13:46:05.855707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:51.884 [2024-10-28 13:46:05.855724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:51.884 13:46:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.884 [2024-10-28 13:46:05.923284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:51.884 [2024-10-28 13:46:05.923362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.884 [2024-10-28 13:46:05.923424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:34:51.884 [2024-10-28 13:46:05.923442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.884 [2024-10-28 13:46:05.926105] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.884 [2024-10-28 13:46:05.926190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:51.884 [2024-10-28 13:46:05.926272] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:51.884 [2024-10-28 13:46:05.926317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:51.884 [2024-10-28 13:46:05.926442] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:51.884 [2024-10-28 13:46:05.926461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:51.884 [2024-10-28 13:46:05.926498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:34:51.884 [2024-10-28 13:46:05.926559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:51.884 [2024-10-28 13:46:05.926709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:34:51.884 [2024-10-28 13:46:05.926733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:51.884 [2024-10-28 13:46:05.926810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:34:51.884 [2024-10-28 13:46:05.926894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:34:51.884 [2024-10-28 13:46:05.926912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:34:51.884 [2024-10-28 13:46:05.927002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.884 pt1 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.884 "name": "raid_bdev1", 00:34:51.884 "uuid": "4da5dcf0-17aa-4265-947f-567201948e8f", 00:34:51.884 "strip_size_kb": 0, 00:34:51.884 "state": "online", 00:34:51.884 "raid_level": "raid1", 00:34:51.884 "superblock": true, 00:34:51.884 "num_base_bdevs": 2, 00:34:51.884 "num_base_bdevs_discovered": 1, 00:34:51.884 "num_base_bdevs_operational": 1, 00:34:51.884 "base_bdevs_list": [ 00:34:51.884 { 00:34:51.884 "name": null, 00:34:51.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.884 "is_configured": false, 00:34:51.884 "data_offset": 256, 00:34:51.884 "data_size": 7936 00:34:51.884 }, 00:34:51.884 { 00:34:51.884 "name": "pt2", 00:34:51.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.884 "is_configured": true, 00:34:51.884 "data_offset": 256, 00:34:51.884 "data_size": 7936 00:34:51.884 } 00:34:51.884 ] 00:34:51.884 }' 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.884 13:46:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:52.451 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:52.452 [2024-10-28 13:46:06.523832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 4da5dcf0-17aa-4265-947f-567201948e8f '!=' 4da5dcf0-17aa-4265-947f-567201948e8f ']' 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 101389 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 101389 ']' 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 101389 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101389 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:52.452 killing process with pid 101389 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101389' 00:34:52.452 
13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 101389 00:34:52.452 [2024-10-28 13:46:06.607095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:52.452 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 101389 00:34:52.452 [2024-10-28 13:46:06.607246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:52.452 [2024-10-28 13:46:06.607317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:52.452 [2024-10-28 13:46:06.607337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:34:52.711 [2024-10-28 13:46:06.632979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:52.970 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:34:52.970 00:34:52.970 real 0m5.878s 00:34:52.970 user 0m9.958s 00:34:52.970 sys 0m1.009s 00:34:52.970 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:52.970 13:46:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:52.970 ************************************ 00:34:52.970 END TEST raid_superblock_test_md_interleaved 00:34:52.970 ************************************ 00:34:52.970 13:46:06 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:34:52.970 13:46:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:52.970 13:46:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:52.970 13:46:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:52.970 ************************************ 00:34:52.970 START TEST raid_rebuild_test_sb_md_interleaved 00:34:52.970 
************************************ 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:52.970 13:46:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=101706 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 101706 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 101706 ']' 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:52.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:52.970 13:46:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:52.970 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:52.970 Zero copy mechanism will not be used. 00:34:52.970 [2024-10-28 13:46:07.057593] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:34:52.970 [2024-10-28 13:46:07.057792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101706 ] 00:34:53.229 [2024-10-28 13:46:07.214469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:34:53.229 [2024-10-28 13:46:07.243116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.229 [2024-10-28 13:46:07.289645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.229 [2024-10-28 13:46:07.347171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:53.229 [2024-10-28 13:46:07.347234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 BaseBdev1_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 [2024-10-28 13:46:08.093082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:54.164 [2024-10-28 13:46:08.093186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.164 
[2024-10-28 13:46:08.093220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:54.164 [2024-10-28 13:46:08.093251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.164 [2024-10-28 13:46:08.096044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.164 [2024-10-28 13:46:08.096311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:54.164 BaseBdev1 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 BaseBdev2_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 [2024-10-28 13:46:08.118383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:54.164 [2024-10-28 13:46:08.118526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.164 [2024-10-28 13:46:08.118554] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:54.164 [2024-10-28 13:46:08.118574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.164 [2024-10-28 13:46:08.121219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.164 [2024-10-28 13:46:08.121295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:54.164 BaseBdev2 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 spare_malloc 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 spare_delay 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 [2024-10-28 13:46:08.150512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:54.164 [2024-10-28 13:46:08.150599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:54.164 [2024-10-28 13:46:08.150630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:54.164 [2024-10-28 13:46:08.150647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:54.164 [2024-10-28 13:46:08.153117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:54.164 [2024-10-28 13:46:08.153191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:54.164 spare 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.164 [2024-10-28 13:46:08.158564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:54.164 [2024-10-28 13:46:08.160984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:54.164 [2024-10-28 13:46:08.161188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:34:54.164 [2024-10-28 13:46:08.161210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:54.164 [2024-10-28 13:46:08.161297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:34:54.164 [2024-10-28 13:46:08.161387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:34:54.164 [2024-10-28 13:46:08.161404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:34:54.164 [2024-10-28 13:46:08.161494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.164 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.165 "name": "raid_bdev1", 00:34:54.165 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:54.165 "strip_size_kb": 0, 00:34:54.165 "state": "online", 00:34:54.165 "raid_level": "raid1", 00:34:54.165 "superblock": true, 00:34:54.165 "num_base_bdevs": 2, 00:34:54.165 "num_base_bdevs_discovered": 2, 00:34:54.165 "num_base_bdevs_operational": 2, 00:34:54.165 "base_bdevs_list": [ 00:34:54.165 { 00:34:54.165 "name": "BaseBdev1", 00:34:54.165 "uuid": "cff581b1-5531-5a33-8dc9-47a66ac42710", 00:34:54.165 "is_configured": true, 00:34:54.165 "data_offset": 256, 00:34:54.165 "data_size": 7936 00:34:54.165 }, 00:34:54.165 { 00:34:54.165 "name": "BaseBdev2", 00:34:54.165 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:54.165 "is_configured": true, 00:34:54.165 "data_offset": 256, 00:34:54.165 "data_size": 7936 00:34:54.165 } 00:34:54.165 ] 00:34:54.165 }' 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.165 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:54.732 
13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.732 [2024-10-28 13:46:08.651162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.732 [2024-10-28 13:46:08.758762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.732 "name": "raid_bdev1", 00:34:54.732 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:54.732 "strip_size_kb": 0, 00:34:54.732 "state": "online", 00:34:54.732 "raid_level": "raid1", 00:34:54.732 "superblock": true, 00:34:54.732 "num_base_bdevs": 2, 00:34:54.732 "num_base_bdevs_discovered": 1, 00:34:54.732 "num_base_bdevs_operational": 1, 00:34:54.732 "base_bdevs_list": [ 00:34:54.732 { 00:34:54.732 "name": null, 00:34:54.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.732 "is_configured": false, 00:34:54.732 "data_offset": 0, 00:34:54.732 "data_size": 7936 00:34:54.732 }, 00:34:54.732 { 00:34:54.732 "name": "BaseBdev2", 00:34:54.732 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:54.732 "is_configured": true, 00:34:54.732 "data_offset": 256, 00:34:54.732 "data_size": 7936 00:34:54.732 } 00:34:54.732 ] 00:34:54.732 }' 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.732 13:46:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:55.299 13:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:55.299 13:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.299 13:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:55.299 [2024-10-28 13:46:09.279114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:55.299 [2024-10-28 13:46:09.296181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:34:55.299 13:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.299 13:46:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:55.299 
[2024-10-28 13:46:09.298865] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:56.233 "name": "raid_bdev1", 00:34:56.233 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:56.233 "strip_size_kb": 0, 00:34:56.233 "state": "online", 00:34:56.233 "raid_level": "raid1", 00:34:56.233 "superblock": true, 00:34:56.233 "num_base_bdevs": 2, 00:34:56.233 "num_base_bdevs_discovered": 2, 00:34:56.233 "num_base_bdevs_operational": 2, 00:34:56.233 "process": { 00:34:56.233 "type": "rebuild", 00:34:56.233 "target": "spare", 00:34:56.233 "progress": { 00:34:56.233 
"blocks": 2560, 00:34:56.233 "percent": 32 00:34:56.233 } 00:34:56.233 }, 00:34:56.233 "base_bdevs_list": [ 00:34:56.233 { 00:34:56.233 "name": "spare", 00:34:56.233 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:34:56.233 "is_configured": true, 00:34:56.233 "data_offset": 256, 00:34:56.233 "data_size": 7936 00:34:56.233 }, 00:34:56.233 { 00:34:56.233 "name": "BaseBdev2", 00:34:56.233 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:56.233 "is_configured": true, 00:34:56.233 "data_offset": 256, 00:34:56.233 "data_size": 7936 00:34:56.233 } 00:34:56.233 ] 00:34:56.233 }' 00:34:56.233 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:56.492 [2024-10-28 13:46:10.468978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:56.492 [2024-10-28 13:46:10.508982] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:56.492 [2024-10-28 13:46:10.509080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:56.492 [2024-10-28 13:46:10.509102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:56.492 [2024-10-28 13:46:10.509122] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:56.492 "name": "raid_bdev1", 00:34:56.492 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:56.492 "strip_size_kb": 0, 00:34:56.492 "state": "online", 00:34:56.492 "raid_level": "raid1", 00:34:56.492 "superblock": true, 00:34:56.492 "num_base_bdevs": 2, 00:34:56.492 "num_base_bdevs_discovered": 1, 00:34:56.492 "num_base_bdevs_operational": 1, 00:34:56.492 "base_bdevs_list": [ 00:34:56.492 { 00:34:56.492 "name": null, 00:34:56.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.492 "is_configured": false, 00:34:56.492 "data_offset": 0, 00:34:56.492 "data_size": 7936 00:34:56.492 }, 00:34:56.492 { 00:34:56.492 "name": "BaseBdev2", 00:34:56.492 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:56.492 "is_configured": true, 00:34:56.492 "data_offset": 256, 00:34:56.492 "data_size": 7936 00:34:56.492 } 00:34:56.492 ] 00:34:56.492 }' 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:56.492 13:46:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:57.060 13:46:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:57.060 "name": "raid_bdev1", 00:34:57.060 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:57.060 "strip_size_kb": 0, 00:34:57.060 "state": "online", 00:34:57.060 "raid_level": "raid1", 00:34:57.060 "superblock": true, 00:34:57.060 "num_base_bdevs": 2, 00:34:57.060 "num_base_bdevs_discovered": 1, 00:34:57.060 "num_base_bdevs_operational": 1, 00:34:57.060 "base_bdevs_list": [ 00:34:57.060 { 00:34:57.060 "name": null, 00:34:57.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.060 "is_configured": false, 00:34:57.060 "data_offset": 0, 00:34:57.060 "data_size": 7936 00:34:57.060 }, 00:34:57.060 { 00:34:57.060 "name": "BaseBdev2", 00:34:57.060 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:57.060 "is_configured": true, 00:34:57.060 "data_offset": 256, 00:34:57.060 "data_size": 7936 00:34:57.060 } 00:34:57.060 ] 00:34:57.060 }' 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:57.060 13:46:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.060 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:57.060 [2024-10-28 13:46:11.216075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:57.318 [2024-10-28 13:46:11.222206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:34:57.318 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.318 13:46:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:57.318 [2024-10-28 13:46:11.224936] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:58.253 "name": "raid_bdev1", 00:34:58.253 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:58.253 "strip_size_kb": 0, 00:34:58.253 "state": "online", 00:34:58.253 "raid_level": "raid1", 00:34:58.253 "superblock": true, 00:34:58.253 "num_base_bdevs": 2, 00:34:58.253 "num_base_bdevs_discovered": 2, 00:34:58.253 "num_base_bdevs_operational": 2, 00:34:58.253 "process": { 00:34:58.253 "type": "rebuild", 00:34:58.253 "target": "spare", 00:34:58.253 "progress": { 00:34:58.253 "blocks": 2560, 00:34:58.253 "percent": 32 00:34:58.253 } 00:34:58.253 }, 00:34:58.253 "base_bdevs_list": [ 00:34:58.253 { 00:34:58.253 "name": "spare", 00:34:58.253 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:34:58.253 "is_configured": true, 00:34:58.253 "data_offset": 256, 00:34:58.253 "data_size": 7936 00:34:58.253 }, 00:34:58.253 { 00:34:58.253 "name": "BaseBdev2", 00:34:58.253 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:58.253 "is_configured": true, 00:34:58.253 "data_offset": 256, 00:34:58.253 "data_size": 7936 00:34:58.253 } 00:34:58.253 ] 00:34:58.253 }' 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:58.253 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:58.254 13:46:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:58.254 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=706 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.254 13:46:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:58.254 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:58.512 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:58.512 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:58.512 "name": "raid_bdev1", 00:34:58.512 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:58.512 "strip_size_kb": 0, 00:34:58.512 "state": "online", 00:34:58.512 "raid_level": "raid1", 00:34:58.512 "superblock": true, 00:34:58.512 "num_base_bdevs": 2, 00:34:58.512 "num_base_bdevs_discovered": 2, 00:34:58.512 "num_base_bdevs_operational": 2, 00:34:58.512 "process": { 00:34:58.512 "type": "rebuild", 00:34:58.512 "target": "spare", 00:34:58.512 "progress": { 00:34:58.512 "blocks": 2816, 00:34:58.512 "percent": 35 00:34:58.512 } 00:34:58.512 }, 00:34:58.512 "base_bdevs_list": [ 00:34:58.512 { 00:34:58.512 "name": "spare", 00:34:58.512 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:34:58.512 "is_configured": true, 00:34:58.512 "data_offset": 256, 00:34:58.512 "data_size": 7936 00:34:58.512 }, 00:34:58.512 { 00:34:58.512 "name": "BaseBdev2", 00:34:58.512 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:58.512 "is_configured": true, 00:34:58.512 "data_offset": 256, 00:34:58.512 "data_size": 7936 00:34:58.512 } 00:34:58.512 ] 00:34:58.512 }' 00:34:58.512 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:58.513 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:58.513 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:58.513 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:58.513 13:46:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:59.448 "name": "raid_bdev1", 00:34:59.448 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:34:59.448 "strip_size_kb": 0, 00:34:59.448 "state": "online", 00:34:59.448 "raid_level": "raid1", 00:34:59.448 "superblock": true, 00:34:59.448 "num_base_bdevs": 2, 00:34:59.448 "num_base_bdevs_discovered": 2, 00:34:59.448 
"num_base_bdevs_operational": 2, 00:34:59.448 "process": { 00:34:59.448 "type": "rebuild", 00:34:59.448 "target": "spare", 00:34:59.448 "progress": { 00:34:59.448 "blocks": 5888, 00:34:59.448 "percent": 74 00:34:59.448 } 00:34:59.448 }, 00:34:59.448 "base_bdevs_list": [ 00:34:59.448 { 00:34:59.448 "name": "spare", 00:34:59.448 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:34:59.448 "is_configured": true, 00:34:59.448 "data_offset": 256, 00:34:59.448 "data_size": 7936 00:34:59.448 }, 00:34:59.448 { 00:34:59.448 "name": "BaseBdev2", 00:34:59.448 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:34:59.448 "is_configured": true, 00:34:59.448 "data_offset": 256, 00:34:59.448 "data_size": 7936 00:34:59.448 } 00:34:59.448 ] 00:34:59.448 }' 00:34:59.448 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:59.706 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:59.706 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:59.706 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:59.706 13:46:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:00.273 [2024-10-28 13:46:14.348843] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:00.273 [2024-10-28 13:46:14.348955] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:00.273 [2024-10-28 13:46:14.349165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.840 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:00.840 "name": "raid_bdev1", 00:35:00.840 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:00.840 "strip_size_kb": 0, 00:35:00.840 "state": "online", 00:35:00.840 "raid_level": "raid1", 00:35:00.840 "superblock": true, 00:35:00.840 "num_base_bdevs": 2, 00:35:00.840 "num_base_bdevs_discovered": 2, 00:35:00.840 "num_base_bdevs_operational": 2, 00:35:00.840 "base_bdevs_list": [ 00:35:00.840 { 00:35:00.840 "name": "spare", 00:35:00.840 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:00.840 "is_configured": true, 00:35:00.840 "data_offset": 256, 00:35:00.840 "data_size": 7936 00:35:00.840 }, 00:35:00.840 { 00:35:00.841 "name": "BaseBdev2", 00:35:00.841 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:00.841 
"is_configured": true, 00:35:00.841 "data_offset": 256, 00:35:00.841 "data_size": 7936 00:35:00.841 } 00:35:00.841 ] 00:35:00.841 }' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:00.841 "name": "raid_bdev1", 00:35:00.841 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:00.841 "strip_size_kb": 0, 00:35:00.841 "state": "online", 00:35:00.841 "raid_level": "raid1", 00:35:00.841 "superblock": true, 00:35:00.841 "num_base_bdevs": 2, 00:35:00.841 "num_base_bdevs_discovered": 2, 00:35:00.841 "num_base_bdevs_operational": 2, 00:35:00.841 "base_bdevs_list": [ 00:35:00.841 { 00:35:00.841 "name": "spare", 00:35:00.841 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:00.841 "is_configured": true, 00:35:00.841 "data_offset": 256, 00:35:00.841 "data_size": 7936 00:35:00.841 }, 00:35:00.841 { 00:35:00.841 "name": "BaseBdev2", 00:35:00.841 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:00.841 "is_configured": true, 00:35:00.841 "data_offset": 256, 00:35:00.841 "data_size": 7936 00:35:00.841 } 00:35:00.841 ] 00:35:00.841 }' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:00.841 13:46:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.099 "name": "raid_bdev1", 00:35:01.099 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:01.099 "strip_size_kb": 0, 00:35:01.099 "state": "online", 00:35:01.099 "raid_level": "raid1", 00:35:01.099 "superblock": true, 00:35:01.099 "num_base_bdevs": 2, 00:35:01.099 "num_base_bdevs_discovered": 2, 00:35:01.099 "num_base_bdevs_operational": 2, 00:35:01.099 "base_bdevs_list": [ 00:35:01.099 { 00:35:01.099 "name": "spare", 00:35:01.099 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:01.099 
"is_configured": true, 00:35:01.099 "data_offset": 256, 00:35:01.099 "data_size": 7936 00:35:01.099 }, 00:35:01.099 { 00:35:01.099 "name": "BaseBdev2", 00:35:01.099 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:01.099 "is_configured": true, 00:35:01.099 "data_offset": 256, 00:35:01.099 "data_size": 7936 00:35:01.099 } 00:35:01.099 ] 00:35:01.099 }' 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.099 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 [2024-10-28 13:46:15.527657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:01.667 [2024-10-28 13:46:15.527939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:01.667 [2024-10-28 13:46:15.528102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:01.667 [2024-10-28 13:46:15.528252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:01.667 [2024-10-28 13:46:15.528273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.667 
13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 [2024-10-28 13:46:15.607641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:01.667 [2024-10-28 13:46:15.607748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:01.667 [2024-10-28 13:46:15.607778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:35:01.667 [2024-10-28 13:46:15.607792] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:01.667 [2024-10-28 13:46:15.610560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:01.667 [2024-10-28 13:46:15.610599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:01.667 [2024-10-28 13:46:15.610692] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:01.667 [2024-10-28 13:46:15.610739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:01.667 [2024-10-28 13:46:15.610876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:01.667 spare 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.667 [2024-10-28 13:46:15.710996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:01.667 [2024-10-28 13:46:15.711053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:35:01.667 [2024-10-28 13:46:15.711245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:35:01.667 [2024-10-28 13:46:15.711383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:01.667 [2024-10-28 13:46:15.711423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:01.667 [2024-10-28 13:46:15.711550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.667 13:46:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:01.667 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.668 13:46:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.668 "name": "raid_bdev1", 00:35:01.668 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:01.668 "strip_size_kb": 0, 00:35:01.668 "state": "online", 00:35:01.668 "raid_level": "raid1", 00:35:01.668 "superblock": true, 00:35:01.668 "num_base_bdevs": 2, 00:35:01.668 "num_base_bdevs_discovered": 2, 00:35:01.668 "num_base_bdevs_operational": 2, 00:35:01.668 "base_bdevs_list": [ 00:35:01.668 { 00:35:01.668 "name": "spare", 00:35:01.668 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:01.668 "is_configured": true, 00:35:01.668 "data_offset": 256, 00:35:01.668 "data_size": 7936 00:35:01.668 }, 00:35:01.668 { 00:35:01.668 "name": "BaseBdev2", 00:35:01.668 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:01.668 "is_configured": true, 00:35:01.668 "data_offset": 256, 00:35:01.668 "data_size": 7936 00:35:01.668 } 00:35:01.668 ] 00:35:01.668 }' 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.668 13:46:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.235 13:46:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:02.235 "name": "raid_bdev1", 00:35:02.235 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:02.235 "strip_size_kb": 0, 00:35:02.235 "state": "online", 00:35:02.235 "raid_level": "raid1", 00:35:02.235 "superblock": true, 00:35:02.235 "num_base_bdevs": 2, 00:35:02.235 "num_base_bdevs_discovered": 2, 00:35:02.235 "num_base_bdevs_operational": 2, 00:35:02.235 "base_bdevs_list": [ 00:35:02.235 { 00:35:02.235 "name": "spare", 00:35:02.235 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:02.235 "is_configured": true, 00:35:02.235 "data_offset": 256, 00:35:02.235 "data_size": 7936 00:35:02.235 }, 00:35:02.235 { 00:35:02.235 "name": "BaseBdev2", 00:35:02.235 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:02.235 "is_configured": true, 00:35:02.235 "data_offset": 256, 00:35:02.235 "data_size": 7936 00:35:02.235 } 00:35:02.235 ] 00:35:02.235 }' 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:02.235 13:46:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.235 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.532 [2024-10-28 13:46:16.440046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:02.532 13:46:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:02.532 "name": "raid_bdev1", 00:35:02.532 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:02.532 "strip_size_kb": 0, 00:35:02.532 "state": "online", 00:35:02.532 "raid_level": "raid1", 00:35:02.532 "superblock": true, 00:35:02.532 "num_base_bdevs": 2, 00:35:02.532 "num_base_bdevs_discovered": 1, 00:35:02.532 "num_base_bdevs_operational": 1, 00:35:02.532 "base_bdevs_list": [ 00:35:02.532 { 00:35:02.532 "name": null, 00:35:02.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.532 "is_configured": false, 00:35:02.532 "data_offset": 0, 00:35:02.532 "data_size": 7936 00:35:02.532 }, 00:35:02.532 { 00:35:02.532 "name": "BaseBdev2", 00:35:02.532 
"uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:02.532 "is_configured": true, 00:35:02.532 "data_offset": 256, 00:35:02.532 "data_size": 7936 00:35:02.532 } 00:35:02.532 ] 00:35:02.532 }' 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:02.532 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:03.098 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:03.098 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.098 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:03.098 [2024-10-28 13:46:16.964324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:03.098 [2024-10-28 13:46:16.964626] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:03.098 [2024-10-28 13:46:16.964649] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:03.098 [2024-10-28 13:46:16.964713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:03.098 [2024-10-28 13:46:16.970058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:35:03.098 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.098 13:46:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:35:03.098 [2024-10-28 13:46:16.972641] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.034 13:46:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.034 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:35:04.034 "name": "raid_bdev1", 00:35:04.034 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:04.034 "strip_size_kb": 0, 00:35:04.034 "state": "online", 00:35:04.034 "raid_level": "raid1", 00:35:04.034 "superblock": true, 00:35:04.034 "num_base_bdevs": 2, 00:35:04.034 "num_base_bdevs_discovered": 2, 00:35:04.034 "num_base_bdevs_operational": 2, 00:35:04.034 "process": { 00:35:04.034 "type": "rebuild", 00:35:04.034 "target": "spare", 00:35:04.034 "progress": { 00:35:04.034 "blocks": 2560, 00:35:04.034 "percent": 32 00:35:04.034 } 00:35:04.034 }, 00:35:04.034 "base_bdevs_list": [ 00:35:04.034 { 00:35:04.034 "name": "spare", 00:35:04.034 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:04.034 "is_configured": true, 00:35:04.034 "data_offset": 256, 00:35:04.034 "data_size": 7936 00:35:04.034 }, 00:35:04.034 { 00:35:04.034 "name": "BaseBdev2", 00:35:04.034 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:04.034 "is_configured": true, 00:35:04.034 "data_offset": 256, 00:35:04.034 "data_size": 7936 00:35:04.034 } 00:35:04.034 ] 00:35:04.034 }' 00:35:04.034 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:04.034 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:04.035 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:04.035 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:04.035 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:35:04.035 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.035 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.035 [2024-10-28 13:46:18.138712] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:04.035 [2024-10-28 13:46:18.181195] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:04.035 [2024-10-28 13:46:18.181296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:04.035 [2024-10-28 13:46:18.181321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:04.035 [2024-10-28 13:46:18.181337] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:04.293 13:46:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.293 "name": "raid_bdev1", 00:35:04.293 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:04.293 "strip_size_kb": 0, 00:35:04.293 "state": "online", 00:35:04.293 "raid_level": "raid1", 00:35:04.293 "superblock": true, 00:35:04.293 "num_base_bdevs": 2, 00:35:04.293 "num_base_bdevs_discovered": 1, 00:35:04.293 "num_base_bdevs_operational": 1, 00:35:04.293 "base_bdevs_list": [ 00:35:04.293 { 00:35:04.293 "name": null, 00:35:04.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.293 "is_configured": false, 00:35:04.293 "data_offset": 0, 00:35:04.293 "data_size": 7936 00:35:04.293 }, 00:35:04.293 { 00:35:04.293 "name": "BaseBdev2", 00:35:04.293 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:04.293 "is_configured": true, 00:35:04.293 "data_offset": 256, 00:35:04.293 "data_size": 7936 00:35:04.293 } 00:35:04.293 ] 00:35:04.293 }' 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.293 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.860 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:04.860 13:46:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.860 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:04.860 [2024-10-28 13:46:18.722869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:04.860 [2024-10-28 13:46:18.723126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.860 [2024-10-28 13:46:18.723187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:04.860 [2024-10-28 13:46:18.723208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.860 [2024-10-28 13:46:18.723479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.860 [2024-10-28 13:46:18.723510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:04.860 [2024-10-28 13:46:18.723595] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:04.860 [2024-10-28 13:46:18.723618] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:04.860 [2024-10-28 13:46:18.723636] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:04.860 [2024-10-28 13:46:18.723680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:04.860 [2024-10-28 13:46:18.728773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:35:04.860 spare 00:35:04.860 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.860 13:46:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:35:04.860 [2024-10-28 13:46:18.731490] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.795 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:35:05.795 "name": "raid_bdev1", 00:35:05.795 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:05.795 "strip_size_kb": 0, 00:35:05.795 "state": "online", 00:35:05.795 "raid_level": "raid1", 00:35:05.795 "superblock": true, 00:35:05.795 "num_base_bdevs": 2, 00:35:05.795 "num_base_bdevs_discovered": 2, 00:35:05.795 "num_base_bdevs_operational": 2, 00:35:05.795 "process": { 00:35:05.795 "type": "rebuild", 00:35:05.795 "target": "spare", 00:35:05.795 "progress": { 00:35:05.795 "blocks": 2560, 00:35:05.795 "percent": 32 00:35:05.795 } 00:35:05.795 }, 00:35:05.795 "base_bdevs_list": [ 00:35:05.795 { 00:35:05.796 "name": "spare", 00:35:05.796 "uuid": "0a9d71fd-49b1-5a3b-8b1f-511f784a4c24", 00:35:05.796 "is_configured": true, 00:35:05.796 "data_offset": 256, 00:35:05.796 "data_size": 7936 00:35:05.796 }, 00:35:05.796 { 00:35:05.796 "name": "BaseBdev2", 00:35:05.796 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:05.796 "is_configured": true, 00:35:05.796 "data_offset": 256, 00:35:05.796 "data_size": 7936 00:35:05.796 } 00:35:05.796 ] 00:35:05.796 }' 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:05.796 [2024-10-28 
13:46:19.901701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:05.796 [2024-10-28 13:46:19.940077] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:05.796 [2024-10-28 13:46:19.940763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.796 [2024-10-28 13:46:19.940925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:05.796 [2024-10-28 13:46:19.940979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:05.796 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:06.054 13:46:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.054 13:46:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.054 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:06.054 "name": "raid_bdev1", 00:35:06.054 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:06.054 "strip_size_kb": 0, 00:35:06.054 "state": "online", 00:35:06.054 "raid_level": "raid1", 00:35:06.054 "superblock": true, 00:35:06.054 "num_base_bdevs": 2, 00:35:06.054 "num_base_bdevs_discovered": 1, 00:35:06.054 "num_base_bdevs_operational": 1, 00:35:06.054 "base_bdevs_list": [ 00:35:06.054 { 00:35:06.054 "name": null, 00:35:06.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.054 "is_configured": false, 00:35:06.054 "data_offset": 0, 00:35:06.054 "data_size": 7936 00:35:06.054 }, 00:35:06.054 { 00:35:06.054 "name": "BaseBdev2", 00:35:06.054 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:06.054 "is_configured": true, 00:35:06.054 "data_offset": 256, 00:35:06.054 "data_size": 7936 00:35:06.054 } 00:35:06.054 ] 00:35:06.054 }' 00:35:06.054 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:06.054 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:06.312 13:46:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.312 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.570 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.570 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:06.570 "name": "raid_bdev1", 00:35:06.570 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:06.570 "strip_size_kb": 0, 00:35:06.570 "state": "online", 00:35:06.570 "raid_level": "raid1", 00:35:06.570 "superblock": true, 00:35:06.570 "num_base_bdevs": 2, 00:35:06.570 "num_base_bdevs_discovered": 1, 00:35:06.570 "num_base_bdevs_operational": 1, 00:35:06.570 "base_bdevs_list": [ 00:35:06.570 { 00:35:06.570 "name": null, 00:35:06.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:06.570 "is_configured": false, 00:35:06.570 "data_offset": 0, 00:35:06.570 "data_size": 7936 00:35:06.570 }, 00:35:06.570 { 00:35:06.570 "name": "BaseBdev2", 00:35:06.570 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:06.570 "is_configured": true, 00:35:06.570 "data_offset": 256, 
00:35:06.570 "data_size": 7936 00:35:06.570 } 00:35:06.570 ] 00:35:06.570 }' 00:35:06.570 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:06.570 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:06.570 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:06.571 [2024-10-28 13:46:20.622503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:06.571 [2024-10-28 13:46:20.622732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.571 [2024-10-28 13:46:20.622785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:06.571 [2024-10-28 13:46:20.622801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:06.571 [2024-10-28 13:46:20.623005] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.571 [2024-10-28 13:46:20.623028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:06.571 [2024-10-28 13:46:20.623099] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:06.571 [2024-10-28 13:46:20.623118] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:06.571 [2024-10-28 13:46:20.623149] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:06.571 [2024-10-28 13:46:20.623165] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:35:06.571 BaseBdev1 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.571 13:46:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:07.504 13:46:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.504 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.762 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:07.762 "name": "raid_bdev1", 00:35:07.762 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:07.762 "strip_size_kb": 0, 00:35:07.762 "state": "online", 00:35:07.762 "raid_level": "raid1", 00:35:07.762 "superblock": true, 00:35:07.762 "num_base_bdevs": 2, 00:35:07.762 "num_base_bdevs_discovered": 1, 00:35:07.762 "num_base_bdevs_operational": 1, 00:35:07.762 "base_bdevs_list": [ 00:35:07.762 { 00:35:07.762 "name": null, 00:35:07.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:07.762 "is_configured": false, 00:35:07.762 "data_offset": 0, 00:35:07.762 "data_size": 7936 00:35:07.762 }, 00:35:07.762 { 00:35:07.762 "name": "BaseBdev2", 00:35:07.762 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:07.762 "is_configured": true, 00:35:07.762 "data_offset": 256, 00:35:07.762 "data_size": 7936 00:35:07.762 } 00:35:07.762 ] 00:35:07.762 }' 00:35:07.762 13:46:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:07.762 13:46:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:08.020 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:08.292 "name": "raid_bdev1", 00:35:08.292 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:08.292 "strip_size_kb": 0, 00:35:08.292 "state": "online", 00:35:08.292 "raid_level": "raid1", 00:35:08.292 "superblock": true, 00:35:08.292 "num_base_bdevs": 2, 00:35:08.292 "num_base_bdevs_discovered": 1, 00:35:08.292 "num_base_bdevs_operational": 1, 00:35:08.292 "base_bdevs_list": [ 00:35:08.292 { 00:35:08.292 "name": 
null, 00:35:08.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.292 "is_configured": false, 00:35:08.292 "data_offset": 0, 00:35:08.292 "data_size": 7936 00:35:08.292 }, 00:35:08.292 { 00:35:08.292 "name": "BaseBdev2", 00:35:08.292 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:08.292 "is_configured": true, 00:35:08.292 "data_offset": 256, 00:35:08.292 "data_size": 7936 00:35:08.292 } 00:35:08.292 ] 00:35:08.292 }' 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:08.292 [2024-10-28 13:46:22.326979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:08.292 [2024-10-28 13:46:22.327225] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:08.292 [2024-10-28 13:46:22.327253] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:08.292 request: 00:35:08.292 { 00:35:08.292 "base_bdev": "BaseBdev1", 00:35:08.292 "raid_bdev": "raid_bdev1", 00:35:08.292 "method": "bdev_raid_add_base_bdev", 00:35:08.292 "req_id": 1 00:35:08.292 } 00:35:08.292 Got JSON-RPC error response 00:35:08.292 response: 00:35:08.292 { 00:35:08.292 "code": -22, 00:35:08.292 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:08.292 } 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:08.292 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:08.293 13:46:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:09.244 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.502 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.502 "name": "raid_bdev1", 00:35:09.502 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:09.502 "strip_size_kb": 0, 
00:35:09.502 "state": "online", 00:35:09.502 "raid_level": "raid1", 00:35:09.502 "superblock": true, 00:35:09.502 "num_base_bdevs": 2, 00:35:09.502 "num_base_bdevs_discovered": 1, 00:35:09.502 "num_base_bdevs_operational": 1, 00:35:09.502 "base_bdevs_list": [ 00:35:09.502 { 00:35:09.502 "name": null, 00:35:09.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.502 "is_configured": false, 00:35:09.502 "data_offset": 0, 00:35:09.502 "data_size": 7936 00:35:09.502 }, 00:35:09.502 { 00:35:09.502 "name": "BaseBdev2", 00:35:09.502 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:09.502 "is_configured": true, 00:35:09.502 "data_offset": 256, 00:35:09.502 "data_size": 7936 00:35:09.502 } 00:35:09.502 ] 00:35:09.502 }' 00:35:09.502 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.502 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:10.067 13:46:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:10.067 "name": "raid_bdev1", 00:35:10.067 "uuid": "5b999ee1-532b-450b-a119-6784d7305c90", 00:35:10.067 "strip_size_kb": 0, 00:35:10.067 "state": "online", 00:35:10.067 "raid_level": "raid1", 00:35:10.067 "superblock": true, 00:35:10.067 "num_base_bdevs": 2, 00:35:10.067 "num_base_bdevs_discovered": 1, 00:35:10.067 "num_base_bdevs_operational": 1, 00:35:10.067 "base_bdevs_list": [ 00:35:10.067 { 00:35:10.067 "name": null, 00:35:10.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.067 "is_configured": false, 00:35:10.067 "data_offset": 0, 00:35:10.067 "data_size": 7936 00:35:10.067 }, 00:35:10.067 { 00:35:10.067 "name": "BaseBdev2", 00:35:10.067 "uuid": "0d8b5aa4-df05-51bc-bc59-ac5ad21c3e6c", 00:35:10.067 "is_configured": true, 00:35:10.067 "data_offset": 256, 00:35:10.067 "data_size": 7936 00:35:10.067 } 00:35:10.067 ] 00:35:10.067 }' 00:35:10.067 13:46:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 101706 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 101706 ']' 00:35:10.067 13:46:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 101706 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:10.067 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101706 00:35:10.067 killing process with pid 101706 00:35:10.068 Received shutdown signal, test time was about 60.000000 seconds 00:35:10.068 00:35:10.068 Latency(us) 00:35:10.068 [2024-10-28T13:46:24.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.068 [2024-10-28T13:46:24.228Z] =================================================================================================================== 00:35:10.068 [2024-10-28T13:46:24.228Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:10.068 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:10.068 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:10.068 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101706' 00:35:10.068 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 101706 00:35:10.068 [2024-10-28 13:46:24.120951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:10.068 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 101706 00:35:10.068 [2024-10-28 13:46:24.121115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:10.068 [2024-10-28 13:46:24.121199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:35:10.068 [2024-10-28 13:46:24.121221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:10.068 [2024-10-28 13:46:24.160031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:10.326 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:35:10.326 00:35:10.326 real 0m17.451s 00:35:10.326 user 0m24.268s 00:35:10.326 sys 0m1.510s 00:35:10.326 ************************************ 00:35:10.326 END TEST raid_rebuild_test_sb_md_interleaved 00:35:10.326 ************************************ 00:35:10.326 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:10.326 13:46:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:35:10.326 13:46:24 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:35:10.326 13:46:24 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:35:10.326 13:46:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 101706 ']' 00:35:10.326 13:46:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 101706 00:35:10.326 13:46:24 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:35:10.326 ************************************ 00:35:10.326 END TEST bdev_raid 00:35:10.326 ************************************ 00:35:10.326 00:35:10.326 real 11m27.884s 00:35:10.326 user 16m55.679s 00:35:10.326 sys 1m46.170s 00:35:10.326 13:46:24 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:10.326 13:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:10.584 13:46:24 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:10.584 13:46:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:10.584 13:46:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:10.584 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:35:10.584 
************************************ 00:35:10.584 START TEST spdkcli_raid 00:35:10.584 ************************************ 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:10.584 * Looking for test storage... 00:35:10.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1689 -- # lcov --version 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.584 13:46:24 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:10.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.584 --rc genhtml_branch_coverage=1 00:35:10.584 --rc genhtml_function_coverage=1 00:35:10.584 --rc genhtml_legend=1 00:35:10.584 --rc geninfo_all_blocks=1 00:35:10.584 --rc geninfo_unexecuted_blocks=1 00:35:10.584 00:35:10.584 ' 00:35:10.584 13:46:24 spdkcli_raid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:10.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.584 --rc genhtml_branch_coverage=1 00:35:10.584 --rc genhtml_function_coverage=1 00:35:10.584 --rc genhtml_legend=1 00:35:10.584 --rc geninfo_all_blocks=1 00:35:10.585 --rc geninfo_unexecuted_blocks=1 00:35:10.585 00:35:10.585 ' 00:35:10.585 
13:46:24 spdkcli_raid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:10.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.585 --rc genhtml_branch_coverage=1 00:35:10.585 --rc genhtml_function_coverage=1 00:35:10.585 --rc genhtml_legend=1 00:35:10.585 --rc geninfo_all_blocks=1 00:35:10.585 --rc geninfo_unexecuted_blocks=1 00:35:10.585 00:35:10.585 ' 00:35:10.585 13:46:24 spdkcli_raid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:10.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.585 --rc genhtml_branch_coverage=1 00:35:10.585 --rc genhtml_function_coverage=1 00:35:10.585 --rc genhtml_legend=1 00:35:10.585 --rc geninfo_all_blocks=1 00:35:10.585 --rc geninfo_unexecuted_blocks=1 00:35:10.585 00:35:10.585 ' 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:35:10.585 13:46:24 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:35:10.585 13:46:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:10.585 13:46:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:10.585 13:46:24 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:35:10.843 13:46:24 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=102383 00:35:10.843 13:46:24 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 102383 00:35:10.843 13:46:24 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:35:10.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 102383 ']' 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:10.843 13:46:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:10.843 [2024-10-28 13:46:24.862940] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 
00:35:10.843 [2024-10-28 13:46:24.863132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102383 ] 00:35:11.101 [2024-10-28 13:46:25.017386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:11.101 [2024-10-28 13:46:25.047636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:11.101 [2024-10-28 13:46:25.090640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.101 [2024-10-28 13:46:25.090698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:35:12.036 13:46:25 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:12.036 13:46:25 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.036 13:46:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:12.036 13:46:25 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:12.036 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:12.036 ' 00:35:13.412 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:35:13.412 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:35:13.670 13:46:27 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:35:13.670 13:46:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:13.670 13:46:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:13.670 13:46:27 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:35:13.670 13:46:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:13.670 13:46:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:13.670 13:46:27 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:35:13.670 ' 00:35:14.605 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:35:14.878 13:46:28 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:35:14.879 13:46:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:14.879 13:46:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:14.879 13:46:28 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:35:14.879 13:46:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:14.879 13:46:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:14.879 13:46:28 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:35:14.879 13:46:28 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:35:15.445 13:46:29 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:35:15.445 13:46:29 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:35:15.445 13:46:29 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:35:15.445 13:46:29 
spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:15.445 13:46:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.445 13:46:29 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:35:15.445 13:46:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:15.445 13:46:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:15.445 13:46:29 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:35:15.445 ' 00:35:16.826 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:35:16.826 13:46:30 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:35:16.826 13:46:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.826 13:46:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:16.826 13:46:30 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:35:16.826 13:46:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.826 13:46:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:16.826 13:46:30 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:35:16.826 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:35:16.826 ' 00:35:18.204 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:35:18.204 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:35:18.204 13:46:32 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:18.204 13:46:32 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 102383 00:35:18.204 13:46:32 spdkcli_raid -- 
common/autotest_common.sh@950 -- # '[' -z 102383 ']' 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 102383 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@955 -- # uname 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102383 00:35:18.204 killing process with pid 102383 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102383' 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 102383 00:35:18.204 13:46:32 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 102383 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 102383 ']' 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 102383 00:35:18.771 13:46:32 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 102383 ']' 00:35:18.771 13:46:32 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 102383 00:35:18.771 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (102383) - No such process 00:35:18.771 Process with pid 102383 is not found 00:35:18.771 13:46:32 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 102383 is not found' 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:18.771 13:46:32 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:18.771 ************************************ 00:35:18.771 END TEST spdkcli_raid 00:35:18.771 ************************************ 00:35:18.771 00:35:18.771 real 0m8.249s 00:35:18.771 user 0m17.714s 00:35:18.771 sys 0m1.066s 00:35:18.771 13:46:32 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:18.771 13:46:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:35:18.771 13:46:32 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:18.771 13:46:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:18.771 13:46:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:18.771 13:46:32 -- common/autotest_common.sh@10 -- # set +x 00:35:18.771 ************************************ 00:35:18.771 START TEST blockdev_raid5f 00:35:18.771 ************************************ 00:35:18.771 13:46:32 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:35:18.771 * Looking for test storage... 
00:35:18.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:18.771 13:46:32 blockdev_raid5f -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:18.771 13:46:32 blockdev_raid5f -- common/autotest_common.sh@1689 -- # lcov --version 00:35:18.771 13:46:32 blockdev_raid5f -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:19.030 13:46:32 blockdev_raid5f -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:35:19.030 13:46:32 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.030 13:46:33 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:35:19.030 13:46:33 blockdev_raid5f -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.030 13:46:33 blockdev_raid5f -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 
00:35:19.031 00:35:19.031 ' 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:19.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.031 --rc genhtml_branch_coverage=1 00:35:19.031 --rc genhtml_function_coverage=1 00:35:19.031 --rc genhtml_legend=1 00:35:19.031 --rc geninfo_all_blocks=1 00:35:19.031 --rc geninfo_unexecuted_blocks=1 00:35:19.031 00:35:19.031 ' 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@671 -- 
# QOS_RUN_TIME=5 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=102641 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:19.031 13:46:33 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 102641 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 102641 ']' 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:19.031 13:46:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:19.031 [2024-10-28 13:46:33.124316] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:19.031 [2024-10-28 13:46:33.124751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102641 ] 00:35:19.290 [2024-10-28 13:46:33.265459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:19.290 [2024-10-28 13:46:33.294163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.290 [2024-10-28 13:46:33.339729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.253 Malloc0 00:35:20.253 Malloc1 00:35:20.253 Malloc2 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@748 
-- # jq -r .name 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7a1e3ba4-2415-4023-ad8d-263a255a5a2b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7a1e3ba4-2415-4023-ad8d-263a255a5a2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7a1e3ba4-2415-4023-ad8d-263a255a5a2b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "53b52392-8e99-488d-923c-4369331d0151",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fbf16643-531d-4a08-a609-c9780c2c245e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "300e8131-f69c-4527-b8ad-d7971f204c2d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:35:20.253 13:46:34 blockdev_raid5f -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:35:20.253 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 102641 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 102641 ']' 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 102641 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102641 00:35:20.253 killing process with pid 102641 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102641' 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 102641 00:35:20.253 13:46:34 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 102641 00:35:20.819 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:20.819 13:46:34 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:20.819 13:46:34 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:20.819 13:46:34 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:20.819 13:46:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:20.819 ************************************ 00:35:20.819 START TEST bdev_hello_world 00:35:20.819 ************************************ 00:35:20.819 13:46:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:35:20.819 [2024-10-28 13:46:34.905340] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:20.820 [2024-10-28 13:46:34.905508] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102685 ] 00:35:21.077 [2024-10-28 13:46:35.046880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:21.078 [2024-10-28 13:46:35.074176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.078 [2024-10-28 13:46:35.112174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.336 [2024-10-28 13:46:35.318651] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:21.336 [2024-10-28 13:46:35.318714] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:35:21.336 [2024-10-28 13:46:35.318765] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:21.336 [2024-10-28 13:46:35.319124] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:21.336 [2024-10-28 13:46:35.319367] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:21.336 [2024-10-28 13:46:35.319441] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:21.336 [2024-10-28 13:46:35.319527] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:35:21.336 00:35:21.336 [2024-10-28 13:46:35.319557] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:21.594 00:35:21.594 real 0m0.758s 00:35:21.594 user 0m0.411s 00:35:21.594 sys 0m0.241s 00:35:21.594 13:46:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:21.594 13:46:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:21.594 ************************************ 00:35:21.594 END TEST bdev_hello_world 00:35:21.594 ************************************ 00:35:21.594 13:46:35 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:35:21.594 13:46:35 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:21.594 13:46:35 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:21.594 13:46:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:21.594 ************************************ 00:35:21.594 START TEST bdev_bounds 00:35:21.594 ************************************ 00:35:21.594 Process bdevio pid: 102706 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=102706 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 102706' 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 102706 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 102706 ']' 00:35:21.594 13:46:35 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:21.594 13:46:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:21.594 [2024-10-28 13:46:35.743662] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:21.594 [2024-10-28 13:46:35.743885] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102706 ] 00:35:21.852 [2024-10-28 13:46:35.899379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:21.852 [2024-10-28 13:46:35.928697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:21.852 [2024-10-28 13:46:35.983129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:21.852 [2024-10-28 13:46:35.983209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.852 [2024-10-28 13:46:35.983294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.836 13:46:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:22.836 13:46:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:35:22.836 13:46:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:22.836 I/O targets: 00:35:22.836 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:35:22.836 00:35:22.836 00:35:22.836 CUnit - A unit testing framework for C - Version 2.1-3 00:35:22.836 http://cunit.sourceforge.net/ 00:35:22.836 00:35:22.836 00:35:22.836 Suite: bdevio tests on: raid5f 00:35:22.836 Test: blockdev write read block ...passed 00:35:22.836 Test: blockdev write zeroes read block ...passed 00:35:22.836 Test: blockdev write zeroes read no split ...passed 00:35:22.836 Test: blockdev write zeroes read split ...passed 00:35:23.096 Test: blockdev write zeroes read split partial ...passed 00:35:23.096 Test: blockdev reset ...passed 00:35:23.096 Test: blockdev write read 8 blocks ...passed 00:35:23.096 Test: blockdev write read size > 128k ...passed 00:35:23.096 Test: blockdev write read invalid size ...passed 00:35:23.096 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:23.096 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:23.096 Test: blockdev write read max offset ...passed 00:35:23.096 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:23.096 Test: blockdev writev readv 8 blocks ...passed 00:35:23.096 Test: 
blockdev writev readv 30 x 1block ...passed 00:35:23.096 Test: blockdev writev readv block ...passed 00:35:23.096 Test: blockdev writev readv size > 128k ...passed 00:35:23.096 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:23.096 Test: blockdev comparev and writev ...passed 00:35:23.096 Test: blockdev nvme passthru rw ...passed 00:35:23.096 Test: blockdev nvme passthru vendor specific ...passed 00:35:23.096 Test: blockdev nvme admin passthru ...passed 00:35:23.096 Test: blockdev copy ...passed 00:35:23.096 00:35:23.096 Run Summary: Type Total Ran Passed Failed Inactive 00:35:23.096 suites 1 1 n/a 0 0 00:35:23.096 tests 23 23 23 0 0 00:35:23.096 asserts 130 130 130 0 n/a 00:35:23.096 00:35:23.096 Elapsed time = 0.387 seconds 00:35:23.096 0 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 102706 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 102706 ']' 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 102706 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102706 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102706' 00:35:23.096 killing process with pid 102706 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 102706 00:35:23.096 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 102706 
00:35:23.356 13:46:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:23.356 00:35:23.356 real 0m1.714s 00:35:23.356 user 0m4.289s 00:35:23.356 sys 0m0.396s 00:35:23.356 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.356 ************************************ 00:35:23.356 END TEST bdev_bounds 00:35:23.356 ************************************ 00:35:23.356 13:46:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:23.356 13:46:37 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:23.356 13:46:37 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:23.356 13:46:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.356 13:46:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:23.356 ************************************ 00:35:23.356 START TEST bdev_nbd 00:35:23.356 ************************************ 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:35:23.356 13:46:37 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=102760 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 102760 /var/tmp/spdk-nbd.sock 00:35:23.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 102760 ']' 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:23.356 13:46:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:23.356 [2024-10-28 13:46:37.488375] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:23.356 [2024-10-28 13:46:37.488789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.615 [2024-10-28 13:46:37.633086] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:23.615 [2024-10-28 13:46:37.657036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.615 [2024-10-28 13:46:37.700518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:24.547 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:24.805 1+0 records in 00:35:24.805 1+0 records out 00:35:24.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670672 s, 6.1 MB/s 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:24.805 13:46:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:35:24.805 13:46:38 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:25.064 { 00:35:25.064 "nbd_device": "/dev/nbd0", 00:35:25.064 "bdev_name": "raid5f" 00:35:25.064 } 00:35:25.064 ]' 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:25.064 { 00:35:25.064 "nbd_device": "/dev/nbd0", 00:35:25.064 "bdev_name": "raid5f" 00:35:25.064 } 00:35:25.064 ]' 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:25.064 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.322 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:25.580 13:46:39 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:25.580 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:35:25.838 /dev/nbd0 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:35:25.838 13:46:39 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:25.838 1+0 records in 00:35:25.838 1+0 records out 00:35:25.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304223 s, 13.5 MB/s 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:25.838 13:46:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:35:26.096 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:26.096 { 00:35:26.096 "nbd_device": "/dev/nbd0", 00:35:26.096 "bdev_name": "raid5f" 00:35:26.096 } 00:35:26.096 ]' 00:35:26.096 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:26.096 { 00:35:26.096 "nbd_device": "/dev/nbd0", 00:35:26.096 "bdev_name": "raid5f" 00:35:26.096 } 00:35:26.096 ]' 00:35:26.096 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:35:26.354 256+0 records in 00:35:26.354 256+0 records out 00:35:26.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105657 s, 99.2 MB/s 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:26.354 256+0 records in 00:35:26.354 256+0 records out 00:35:26.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364277 s, 28.8 MB/s 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:26.354 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.613 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:35:26.871 13:46:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:27.130 malloc_lvol_verify 00:35:27.130 13:46:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:27.389 dd231bbc-269f-4dbb-b1e1-fdb18d9d9561 00:35:27.389 13:46:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:27.647 fb1a52e0-a2f7-4f05-bae1-418abca28eee 00:35:27.647 13:46:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:27.905 /dev/nbd0 
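The `nbd_dd_data_verify` steps traced earlier (`nbd_common.sh@76`–`@85`) follow a common write-then-compare pattern: fill a scratch file with random data, `dd` it onto the device, then `cmp` the two. A minimal standalone sketch of that pattern, using a plain temp file in place of `/dev/nbd0` (all paths here are illustrative stand-ins, not SPDK's):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-ins for the random test file and the nbd device.
tmpdir=$(mktemp -d)
randfile="$tmpdir/nbdrandtest"
device="$tmpdir/fake_nbd0"   # a plain file here; the real test writes /dev/nbd0

# Write phase: generate 1 MiB of random data, then copy it to the "device"
# (mirrors nbd_common.sh@76 and @78 in the trace above).
dd if=/dev/urandom of="$randfile" bs=4096 count=256 2>/dev/null
dd if="$randfile" of="$device" bs=4096 count=256 2>/dev/null

# Verify phase: byte-compare the first 1M, as nbd_common.sh@83 does with cmp.
cmp -b -n 1M "$randfile" "$device" && status=ok && echo "verify ok"

rm -rf "$tmpdir"
```

The real helper adds `oflag=direct`/`iflag=direct` so the comparison exercises the block device rather than the page cache.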
00:35:27.905 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:35:27.905 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:35:27.905 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:35:27.905 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:35:27.905 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:35:27.905 mke2fs 1.47.0 (5-Feb-2023) 00:35:27.905 Discarding device blocks: 0/4096 done 00:35:27.905 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:27.905 00:35:27.905 Allocating group tables: 0/1 done 00:35:27.905 Writing inode tables: 0/1 done 00:35:28.162 Creating journal (1024 blocks): done 00:35:28.162 Writing superblocks and filesystem accounting information: 0/1 done 00:35:28.162 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:28.162 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
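The `waitfornbd`/`waitfornbd_exit` calls that recur throughout this trace (`nbd_common.sh@37`–`@45`) are bounded polling loops: grep `/proc/partitions` for the device name up to 20 times before giving up. A generic sketch of that retry shape, with a sentinel file standing in for the `/proc/partitions` check (helper name and 0.1s interval are illustrative, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# Bounded polling helper in the spirit of waitfornbd: retry a predicate
# up to 20 times, sleeping briefly between attempts.
wait_for() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0    # predicate passed -> break out, like @41 'break'
        fi
        sleep 0.1
    done
    return 1            # never appeared within the retry budget
}

# Example: wait until a sentinel file appears (stands in for
# 'grep -q -w nbd0 /proc/partitions' in the trace).
sentinel=$(mktemp -u)
( sleep 0.3; touch "$sentinel" ) &
if wait_for test -e "$sentinel"; then
    status=ready
    echo "device ready"
fi
wait
rm -f "$sentinel"
```

The bounded loop keeps a missing nbd device from hanging the test forever; the RPC returning does not guarantee the kernel has published the partition yet, which is why the poll exists at all.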
00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 102760 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 102760 ']' 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 102760 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 102760 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 102760' 00:35:28.420 killing process with pid 102760 00:35:28.420 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 102760 00:35:28.421 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 102760 00:35:28.689 13:46:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:35:28.689 00:35:28.689 real 0m5.266s 00:35:28.689 user 0m8.104s 00:35:28.689 sys 0m1.275s 00:35:28.689 13:46:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:28.689 13:46:42 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:35:28.689 ************************************ 00:35:28.689 END TEST bdev_nbd 00:35:28.689 ************************************ 00:35:28.689 13:46:42 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:35:28.689 13:46:42 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:35:28.689 13:46:42 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:35:28.689 13:46:42 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:35:28.689 13:46:42 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:28.689 13:46:42 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:28.689 13:46:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:28.689 ************************************ 00:35:28.689 START TEST bdev_fio 00:35:28.689 ************************************ 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:35:28.689 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:28.689 13:46:42 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:28.984 ************************************ 00:35:28.984 START TEST bdev_fio_rw_verify 00:35:28.984 ************************************ 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:28.984 13:46:42 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:35:28.984 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:35:28.984 fio-3.35 00:35:28.984 Starting 1 thread 00:35:41.191 00:35:41.191 job_raid5f: (groupid=0, jobs=1): err= 0: pid=102948: Mon Oct 28 13:46:53 2024 00:35:41.191 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(359MiB/10001msec) 00:35:41.191 slat (usec): min=21, max=112, avg=26.01, stdev= 6.17 00:35:41.191 clat (usec): min=12, max=503, avg=171.38, stdev=64.40 00:35:41.191 lat (usec): min=35, max=540, avg=197.39, stdev=65.41 00:35:41.191 clat percentiles (usec): 00:35:41.191 | 50.000th=[ 169], 99.000th=[ 310], 99.900th=[ 371], 99.990th=[ 416], 00:35:41.191 | 99.999th=[ 502] 00:35:41.191 write: IOPS=9669, BW=37.8MiB/s (39.6MB/s)(373MiB/9883msec); 0 zone resets 00:35:41.191 slat (usec): min=10, max=180, avg=22.36, stdev= 6.83 00:35:41.191 clat (usec): min=76, max=823, avg=397.24, stdev=61.91 00:35:41.191 lat (usec): min=96, max=903, avg=419.60, stdev=63.95 00:35:41.191 clat percentiles (usec): 00:35:41.191 | 50.000th=[ 396], 99.000th=[ 570], 99.900th=[ 693], 99.990th=[ 783], 00:35:41.191 | 99.999th=[ 824] 00:35:41.191 bw ( KiB/s): min=33776, max=41648, per=98.28%, avg=38011.37, stdev=2249.67, samples=19 00:35:41.191 iops : min= 8444, max=10412, avg=9502.84, stdev=562.42, samples=19 00:35:41.191 lat (usec) : 20=0.01%, 50=0.01%, 100=8.79%, 250=34.30%, 500=54.72% 00:35:41.191 lat (usec) : 750=2.18%, 1000=0.01% 00:35:41.191 cpu : usr=98.44%, sys=0.80%, ctx=23, majf=0, minf=10983 00:35:41.191 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:41.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.191 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:41.191 
issued rwts: total=91984,95562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:41.191 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:41.191 00:35:41.191 Run status group 0 (all jobs): 00:35:41.191 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=359MiB (377MB), run=10001-10001msec 00:35:41.191 WRITE: bw=37.8MiB/s (39.6MB/s), 37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=373MiB (391MB), run=9883-9883msec 00:35:41.191 ----------------------------------------------------- 00:35:41.191 Suppressions used: 00:35:41.191 count bytes template 00:35:41.191 1 7 /usr/src/fio/parse.c 00:35:41.191 708 67968 /usr/src/fio/iolog.c 00:35:41.191 1 8 libtcmalloc_minimal.so 00:35:41.191 1 904 libcrypto.so 00:35:41.191 ----------------------------------------------------- 00:35:41.191 00:35:41.191 00:35:41.191 real 0m11.390s 00:35:41.191 user 0m11.576s 00:35:41.191 sys 0m0.730s 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:35:41.191 ************************************ 00:35:41.191 END TEST bdev_fio_rw_verify 00:35:41.191 ************************************ 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 
00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7a1e3ba4-2415-4023-ad8d-263a255a5a2b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7a1e3ba4-2415-4023-ad8d-263a255a5a2b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7a1e3ba4-2415-4023-ad8d-263a255a5a2b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "53b52392-8e99-488d-923c-4369331d0151",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "fbf16643-531d-4a08-a609-c9780c2c245e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "300e8131-f69c-4527-b8ad-d7971f204c2d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:35:41.191 /home/vagrant/spdk_repo/spdk 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:35:41.191 00:35:41.191 real 0m11.598s 00:35:41.191 user 0m11.676s 00:35:41.191 sys 0m0.813s 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:41.191 13:46:54 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:35:41.191 ************************************ 00:35:41.191 END TEST bdev_fio 00:35:41.191 ************************************ 00:35:41.191 13:46:54 blockdev_raid5f -- 
bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:41.191 13:46:54 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:41.191 13:46:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:35:41.191 13:46:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:41.191 13:46:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:41.191 ************************************ 00:35:41.191 START TEST bdev_verify 00:35:41.191 ************************************ 00:35:41.191 13:46:54 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:41.191 [2024-10-28 13:46:54.472829] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:41.191 [2024-10-28 13:46:54.473014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103101 ] 00:35:41.191 [2024-10-28 13:46:54.625858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:41.191 [2024-10-28 13:46:54.660454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:41.191 [2024-10-28 13:46:54.705047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.191 [2024-10-28 13:46:54.705100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.191 Running I/O for 5 seconds... 
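The bdevperf summaries in this log report both IOPS and MiB/s for 4096-byte I/Os; the two columns are related by MiB/s = IOPS × 4096 / 2^20. A quick awk cross-check against the 5-second average reported below (14440.60 IOPS, 56.41 MiB/s):

```shell
#!/usr/bin/env bash
# Cross-check bdevperf's bandwidth column: 14440.60 IOPS of 4 KiB I/Os
# should come out to the reported 56.41 MiB/s.
iops=14440.60
mibs=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 4096 / 1048576 }')
echo "$mibs MiB/s"   # expect 56.41 MiB/s
```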
00:35:43.053 14565.00 IOPS, 56.89 MiB/s [2024-10-28T13:46:58.144Z] 14631.50 IOPS, 57.15 MiB/s [2024-10-28T13:46:59.076Z] 14457.33 IOPS, 56.47 MiB/s [2024-10-28T13:47:00.009Z] 14293.50 IOPS, 55.83 MiB/s [2024-10-28T13:47:00.009Z] 14440.60 IOPS, 56.41 MiB/s 00:35:45.849 Latency(us) 00:35:45.849 [2024-10-28T13:47:00.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.849 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:45.849 Verification LBA range: start 0x0 length 0x2000 00:35:45.849 raid5f : 5.02 7190.35 28.09 0.00 0.00 26794.14 237.38 20494.89 00:35:45.849 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:45.849 Verification LBA range: start 0x2000 length 0x2000 00:35:45.849 raid5f : 5.02 7251.53 28.33 0.00 0.00 26546.82 286.72 20494.89 00:35:45.849 [2024-10-28T13:47:00.009Z] =================================================================================================================== 00:35:45.849 [2024-10-28T13:47:00.009Z] Total : 14441.87 56.41 0.00 0.00 26669.90 237.38 20494.89 00:35:46.107 00:35:46.107 real 0m5.841s 00:35:46.107 user 0m10.784s 00:35:46.107 sys 0m0.286s 00:35:46.107 13:47:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:46.107 13:47:00 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:35:46.107 ************************************ 00:35:46.107 END TEST bdev_verify 00:35:46.107 ************************************ 00:35:46.107 13:47:00 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:46.107 13:47:00 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:35:46.107 13:47:00 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:46.107 13:47:00 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:35:46.365 ************************************ 00:35:46.365 START TEST bdev_verify_big_io 00:35:46.365 ************************************ 00:35:46.365 13:47:00 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:46.365 [2024-10-28 13:47:00.351791] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:46.365 [2024-10-28 13:47:00.351974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103177 ] 00:35:46.365 [2024-10-28 13:47:00.494880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:46.365 [2024-10-28 13:47:00.521137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:46.623 [2024-10-28 13:47:00.562193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.623 [2024-10-28 13:47:00.562271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.880 Running I/O for 5 seconds... 
00:35:49.189 568.00 IOPS, 35.50 MiB/s [2024-10-28T13:47:04.292Z] 665.00 IOPS, 41.56 MiB/s [2024-10-28T13:47:05.227Z] 697.33 IOPS, 43.58 MiB/s [2024-10-28T13:47:06.162Z] 744.75 IOPS, 46.55 MiB/s [2024-10-28T13:47:06.162Z] 761.20 IOPS, 47.58 MiB/s 00:35:52.002 Latency(us) 00:35:52.002 [2024-10-28T13:47:06.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.002 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:52.002 Verification LBA range: start 0x0 length 0x200 00:35:52.002 raid5f : 5.27 372.97 23.31 0.00 0.00 8431339.51 175.01 364141.85 00:35:52.002 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:52.002 Verification LBA range: start 0x200 length 0x200 00:35:52.002 raid5f : 5.27 373.38 23.34 0.00 0.00 8399144.00 249.48 364141.85 00:35:52.002 [2024-10-28T13:47:06.162Z] =================================================================================================================== 00:35:52.002 [2024-10-28T13:47:06.162Z] Total : 746.36 46.65 0.00 0.00 8415241.75 175.01 364141.85 00:35:52.261 00:35:52.261 real 0m6.072s 00:35:52.261 user 0m11.295s 00:35:52.261 sys 0m0.268s 00:35:52.261 13:47:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:52.261 ************************************ 00:35:52.261 END TEST bdev_verify_big_io 00:35:52.261 13:47:06 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:35:52.261 ************************************ 00:35:52.261 13:47:06 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:52.261 13:47:06 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:35:52.261 13:47:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:52.261 13:47:06 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:52.261 ************************************ 00:35:52.261 START TEST bdev_write_zeroes 00:35:52.261 ************************************ 00:35:52.261 13:47:06 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:52.519 [2024-10-28 13:47:06.494247] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:52.519 [2024-10-28 13:47:06.494491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103259 ] 00:35:52.519 [2024-10-28 13:47:06.649626] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:52.777 [2024-10-28 13:47:06.680304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.777 [2024-10-28 13:47:06.733268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.035 Running I/O for 1 seconds... 
00:35:53.968 21975.00 IOPS, 85.84 MiB/s 00:35:53.968 Latency(us) 00:35:53.968 [2024-10-28T13:47:08.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.968 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:35:53.968 raid5f : 1.01 21977.29 85.85 0.00 0.00 5804.17 1824.58 9294.20 00:35:53.968 [2024-10-28T13:47:08.128Z] =================================================================================================================== 00:35:53.968 [2024-10-28T13:47:08.128Z] Total : 21977.29 85.85 0.00 0.00 5804.17 1824.58 9294.20 00:35:54.227 00:35:54.227 real 0m1.807s 00:35:54.227 user 0m1.433s 00:35:54.227 sys 0m0.259s 00:35:54.227 13:47:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.227 ************************************ 00:35:54.227 13:47:08 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:35:54.227 END TEST bdev_write_zeroes 00:35:54.227 ************************************ 00:35:54.227 13:47:08 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:54.227 13:47:08 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:35:54.227 13:47:08 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.227 13:47:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:54.227 ************************************ 00:35:54.227 START TEST bdev_json_nonenclosed 00:35:54.227 ************************************ 00:35:54.227 13:47:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:54.227 [2024-10-28 
13:47:08.359766] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:54.227 [2024-10-28 13:47:08.359964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103300 ] 00:35:54.486 [2024-10-28 13:47:08.514509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:54.486 [2024-10-28 13:47:08.541037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.486 [2024-10-28 13:47:08.581050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.486 [2024-10-28 13:47:08.581206] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:35:54.486 [2024-10-28 13:47:08.581235] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:54.486 [2024-10-28 13:47:08.581248] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:54.745 00:35:54.745 real 0m0.420s 00:35:54.745 user 0m0.178s 00:35:54.745 sys 0m0.138s 00:35:54.745 13:47:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.745 13:47:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:35:54.745 ************************************ 00:35:54.745 END TEST bdev_json_nonenclosed 00:35:54.745 ************************************ 00:35:54.745 13:47:08 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:54.745 13:47:08 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:35:54.745 
13:47:08 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.745 13:47:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:54.745 ************************************ 00:35:54.745 START TEST bdev_json_nonarray 00:35:54.745 ************************************ 00:35:54.745 13:47:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:54.745 [2024-10-28 13:47:08.811422] Starting SPDK v25.01-pre git sha1 83ba90867 / DPDK 24.11.0-rc1 initialization... 00:35:54.745 [2024-10-28 13:47:08.811595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103327 ] 00:35:55.003 [2024-10-28 13:47:08.948961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:55.004 [2024-10-28 13:47:08.973522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.004 [2024-10-28 13:47:09.013804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.004 [2024-10-28 13:47:09.013968] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:35:55.004 [2024-10-28 13:47:09.014000] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:55.004 [2024-10-28 13:47:09.014014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:55.004 00:35:55.004 real 0m0.377s 00:35:55.004 user 0m0.158s 00:35:55.004 sys 0m0.115s 00:35:55.004 13:47:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.004 13:47:09 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:35:55.004 ************************************ 00:35:55.004 END TEST bdev_json_nonarray 00:35:55.004 ************************************ 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:35:55.004 13:47:09 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:35:55.004 00:35:55.004 real 0m36.338s 00:35:55.004 user 0m50.501s 00:35:55.004 sys 0m4.685s 00:35:55.004 13:47:09 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:55.004 13:47:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:35:55.004 
************************************ 00:35:55.004 END TEST blockdev_raid5f 00:35:55.004 ************************************ 00:35:55.262 13:47:09 -- spdk/autotest.sh@194 -- # uname -s 00:35:55.262 13:47:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@256 -- # timing_exit lib 00:35:55.262 13:47:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:55.262 13:47:09 -- common/autotest_common.sh@10 -- # set +x 00:35:55.262 13:47:09 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:55.262 13:47:09 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:55.262 13:47:09 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:35:55.262 13:47:09 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:55.262 13:47:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:55.262 13:47:09 -- common/autotest_common.sh@10 -- # set +x 00:35:55.262 13:47:09 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:55.262 13:47:09 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:35:55.262 13:47:09 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:35:55.262 13:47:09 -- common/autotest_common.sh@10 -- # set +x 00:35:57.174 INFO: APP EXITING 00:35:57.174 INFO: killing all VMs 00:35:57.174 INFO: killing vhost app 00:35:57.174 INFO: EXIT DONE 00:35:57.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:57.174 Waiting for block devices as requested 00:35:57.174 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.432 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:57.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:57.998 Cleaning 00:35:57.998 Removing: /var/run/dpdk/spdk0/config 00:35:57.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:57.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:57.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:57.998 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:57.998 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:57.998 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:57.998 Removing: /dev/shm/spdk_tgt_trace.pid70508 00:35:57.998 Removing: /var/run/dpdk/spdk0 00:35:57.998 Removing: /var/run/dpdk/spdk_pid100128 00:35:57.998 Removing: /var/run/dpdk/spdk_pid100451 00:35:57.998 Removing: /var/run/dpdk/spdk_pid101389 00:35:57.998 Removing: /var/run/dpdk/spdk_pid101706 00:35:57.998 Removing: /var/run/dpdk/spdk_pid102383 00:35:58.257 Removing: /var/run/dpdk/spdk_pid102641 00:35:58.257 Removing: 
/var/run/dpdk/spdk_pid102685 00:35:58.257 Removing: /var/run/dpdk/spdk_pid102706 00:35:58.257 Removing: /var/run/dpdk/spdk_pid102933 00:35:58.257 Removing: /var/run/dpdk/spdk_pid103101 00:35:58.257 Removing: /var/run/dpdk/spdk_pid103177 00:35:58.257 Removing: /var/run/dpdk/spdk_pid103259 00:35:58.257 Removing: /var/run/dpdk/spdk_pid103300 00:35:58.257 Removing: /var/run/dpdk/spdk_pid103327 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70339 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70508 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70721 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70803 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70837 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70948 00:35:58.257 Removing: /var/run/dpdk/spdk_pid70966 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71154 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71240 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71327 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71427 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71513 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71548 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71590 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71657 00:35:58.257 Removing: /var/run/dpdk/spdk_pid71763 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72229 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72282 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72334 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72350 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72424 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72440 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72509 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72530 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72578 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72596 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72644 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72662 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72800 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72831 00:35:58.257 Removing: /var/run/dpdk/spdk_pid72920 00:35:58.257 Removing: 
/var/run/dpdk/spdk_pid74135 00:35:58.257 Removing: /var/run/dpdk/spdk_pid74340 00:35:58.257 Removing: /var/run/dpdk/spdk_pid74476 00:35:58.257 Removing: /var/run/dpdk/spdk_pid75108 00:35:58.257 Removing: /var/run/dpdk/spdk_pid75309 00:35:58.257 Removing: /var/run/dpdk/spdk_pid75443 00:35:58.257 Removing: /var/run/dpdk/spdk_pid76070 00:35:58.257 Removing: /var/run/dpdk/spdk_pid76400 00:35:58.257 Removing: /var/run/dpdk/spdk_pid76529 00:35:58.257 Removing: /var/run/dpdk/spdk_pid77914 00:35:58.257 Removing: /var/run/dpdk/spdk_pid78162 00:35:58.257 Removing: /var/run/dpdk/spdk_pid78302 00:35:58.257 Removing: /var/run/dpdk/spdk_pid79676 00:35:58.257 Removing: /var/run/dpdk/spdk_pid79929 00:35:58.257 Removing: /var/run/dpdk/spdk_pid80058 00:35:58.257 Removing: /var/run/dpdk/spdk_pid81442 00:35:58.257 Removing: /var/run/dpdk/spdk_pid81882 00:35:58.257 Removing: /var/run/dpdk/spdk_pid82018 00:35:58.257 Removing: /var/run/dpdk/spdk_pid83494 00:35:58.257 Removing: /var/run/dpdk/spdk_pid83753 00:35:58.257 Removing: /var/run/dpdk/spdk_pid83893 00:35:58.257 Removing: /var/run/dpdk/spdk_pid85376 00:35:58.257 Removing: /var/run/dpdk/spdk_pid85634 00:35:58.257 Removing: /var/run/dpdk/spdk_pid85767 00:35:58.257 Removing: /var/run/dpdk/spdk_pid87248 00:35:58.257 Removing: /var/run/dpdk/spdk_pid87730 00:35:58.257 Removing: /var/run/dpdk/spdk_pid87857 00:35:58.257 Removing: /var/run/dpdk/spdk_pid87994 00:35:58.257 Removing: /var/run/dpdk/spdk_pid88428 00:35:58.257 Removing: /var/run/dpdk/spdk_pid89179 00:35:58.257 Removing: /var/run/dpdk/spdk_pid89554 00:35:58.257 Removing: /var/run/dpdk/spdk_pid90244 00:35:58.257 Removing: /var/run/dpdk/spdk_pid90725 00:35:58.257 Removing: /var/run/dpdk/spdk_pid91523 00:35:58.257 Removing: /var/run/dpdk/spdk_pid91932 00:35:58.257 Removing: /var/run/dpdk/spdk_pid93888 00:35:58.257 Removing: /var/run/dpdk/spdk_pid94326 00:35:58.257 Removing: /var/run/dpdk/spdk_pid94761 00:35:58.257 Removing: /var/run/dpdk/spdk_pid96841 00:35:58.257 Removing: 
/var/run/dpdk/spdk_pid97326 00:35:58.257 Removing: /var/run/dpdk/spdk_pid97818 00:35:58.257 Removing: /var/run/dpdk/spdk_pid98869 00:35:58.257 Removing: /var/run/dpdk/spdk_pid99192 00:35:58.257 Clean 00:35:58.515 13:47:12 -- common/autotest_common.sh@1449 -- # return 0 00:35:58.515 13:47:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:35:58.516 13:47:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:58.516 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:35:58.516 13:47:12 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:35:58.516 13:47:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:58.516 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:35:58.516 13:47:12 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:58.516 13:47:12 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:58.516 13:47:12 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:58.516 13:47:12 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:35:58.516 13:47:12 -- spdk/autotest.sh@394 -- # hostname 00:35:58.516 13:47:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:58.774 geninfo: WARNING: invalid characters removed from testname! 
00:36:25.303 13:47:37 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:26.678 13:47:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:29.959 13:47:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:32.488 13:47:46 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:35.074 13:47:48 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:37.602 13:47:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:40.139 13:47:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:40.139 13:47:53 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:36:40.139 13:47:53 -- common/autotest_common.sh@1689 -- $ lcov --version 00:36:40.139 13:47:53 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:36:40.139 13:47:53 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:36:40.139 13:47:53 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:36:40.139 13:47:53 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:36:40.139 13:47:53 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:36:40.139 13:47:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:36:40.139 13:47:53 -- scripts/common.sh@336 -- $ read -ra ver1 00:36:40.139 13:47:53 -- scripts/common.sh@337 -- $ IFS=.-: 00:36:40.139 13:47:53 -- scripts/common.sh@337 -- $ read -ra ver2 00:36:40.139 13:47:53 -- scripts/common.sh@338 -- $ local 'op=<' 00:36:40.139 13:47:53 -- scripts/common.sh@340 -- $ ver1_l=2 00:36:40.139 13:47:53 -- scripts/common.sh@341 -- $ ver2_l=1 00:36:40.139 13:47:53 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:36:40.139 13:47:53 -- scripts/common.sh@344 -- $ case "$op" in 00:36:40.139 13:47:53 -- scripts/common.sh@345 -- $ : 1 00:36:40.139 13:47:53 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:36:40.139 13:47:53 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:40.139 13:47:53 -- scripts/common.sh@365 -- $ decimal 1 00:36:40.139 13:47:53 -- scripts/common.sh@353 -- $ local d=1 00:36:40.139 13:47:53 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:36:40.139 13:47:53 -- scripts/common.sh@355 -- $ echo 1 00:36:40.139 13:47:53 -- scripts/common.sh@365 -- $ ver1[v]=1 00:36:40.139 13:47:53 -- scripts/common.sh@366 -- $ decimal 2 00:36:40.139 13:47:53 -- scripts/common.sh@353 -- $ local d=2 00:36:40.139 13:47:53 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:36:40.139 13:47:53 -- scripts/common.sh@355 -- $ echo 2 00:36:40.139 13:47:53 -- scripts/common.sh@366 -- $ ver2[v]=2 00:36:40.139 13:47:53 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:36:40.139 13:47:53 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:36:40.139 13:47:53 -- scripts/common.sh@368 -- $ return 0 00:36:40.140 13:47:53 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.140 13:47:53 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:36:40.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.140 --rc genhtml_branch_coverage=1 00:36:40.140 --rc genhtml_function_coverage=1 00:36:40.140 --rc genhtml_legend=1 00:36:40.140 --rc geninfo_all_blocks=1 00:36:40.140 --rc geninfo_unexecuted_blocks=1 00:36:40.140 00:36:40.140 ' 00:36:40.140 13:47:53 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:36:40.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.140 --rc genhtml_branch_coverage=1 00:36:40.140 --rc genhtml_function_coverage=1 00:36:40.140 --rc genhtml_legend=1 00:36:40.140 --rc geninfo_all_blocks=1 00:36:40.140 --rc geninfo_unexecuted_blocks=1 00:36:40.140 00:36:40.140 ' 00:36:40.140 13:47:53 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:36:40.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.140 --rc genhtml_branch_coverage=1 00:36:40.140 --rc 
genhtml_function_coverage=1 00:36:40.140 --rc genhtml_legend=1 00:36:40.140 --rc geninfo_all_blocks=1 00:36:40.140 --rc geninfo_unexecuted_blocks=1 00:36:40.140 00:36:40.140 ' 00:36:40.140 13:47:53 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:36:40.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.140 --rc genhtml_branch_coverage=1 00:36:40.140 --rc genhtml_function_coverage=1 00:36:40.140 --rc genhtml_legend=1 00:36:40.140 --rc geninfo_all_blocks=1 00:36:40.140 --rc geninfo_unexecuted_blocks=1 00:36:40.140 00:36:40.140 ' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:40.140 13:47:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:40.140 13:47:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:40.140 13:47:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.140 13:47:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.140 13:47:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.140 13:47:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.140 13:47:53 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.140 13:47:53 -- paths/export.sh@5 -- $ export PATH 00:36:40.140 13:47:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.140 13:47:53 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:36:40.140 13:47:53 -- common/autobuild_common.sh@486 -- $ date +%s 00:36:40.140 13:47:53 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730123273.XXXXXX 00:36:40.140 13:47:53 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730123273.ZpdVfn 00:36:40.140 13:47:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:36:40.140 13:47:53 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:36:40.140 13:47:53 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@502 -- $ get_config_params 00:36:40.140 13:47:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:36:40.140 13:47:53 -- common/autotest_common.sh@10 -- $ set +x 00:36:40.140 13:47:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:36:40.140 13:47:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:36:40.140 13:47:53 -- pm/common@17 -- $ local monitor 00:36:40.140 13:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.140 13:47:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:40.140 13:47:53 -- pm/common@25 -- $ sleep 1 00:36:40.140 13:47:53 -- pm/common@21 -- $ date +%s 00:36:40.140 13:47:53 -- pm/common@21 -- $ date +%s 00:36:40.140 13:47:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730123274 00:36:40.140 13:47:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730123274 00:36:40.140 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730123274_collect-vmstat.pm.log 00:36:40.140 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730123274_collect-cpu-load.pm.log 00:36:41.085 13:47:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:36:41.085 13:47:55 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:36:41.085 13:47:55 -- spdk/autopackage.sh@14 -- $ timing_finish 00:36:41.085 13:47:55 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:41.085 13:47:55 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:41.085 13:47:55 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:41.085 13:47:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:41.085 13:47:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:41.085 13:47:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:41.085 13:47:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:41.086 13:47:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:36:41.086 13:47:55 -- pm/common@44 -- $ pid=104827 00:36:41.086 13:47:55 -- pm/common@50 -- $ kill -TERM 104827 00:36:41.086 13:47:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:41.086 13:47:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:36:41.086 13:47:55 -- pm/common@44 -- $ pid=104828 00:36:41.086 13:47:55 -- pm/common@50 -- $ kill -TERM 104828 00:36:41.086 + [[ -n 5980 ]] 00:36:41.086 + sudo kill 5980 00:36:41.094 [Pipeline] } 00:36:41.113 [Pipeline] // timeout 00:36:41.120 [Pipeline] } 00:36:41.135 [Pipeline] // stage 00:36:41.140 [Pipeline] } 00:36:41.157 [Pipeline] // catchError 00:36:41.168 [Pipeline] stage 00:36:41.170 [Pipeline] { (Stop VM) 00:36:41.183 [Pipeline] sh 00:36:41.458 + vagrant halt 00:36:44.742 ==> default: Halting domain... 00:36:51.312 [Pipeline] sh 00:36:51.647 + vagrant destroy -f 00:36:54.929 ==> default: Removing domain... 
00:36:54.939 [Pipeline] sh 00:36:55.219 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:36:55.228 [Pipeline] } 00:36:55.243 [Pipeline] // stage 00:36:55.248 [Pipeline] } 00:36:55.262 [Pipeline] // dir 00:36:55.267 [Pipeline] } 00:36:55.281 [Pipeline] // wrap 00:36:55.287 [Pipeline] } 00:36:55.299 [Pipeline] // catchError 00:36:55.308 [Pipeline] stage 00:36:55.311 [Pipeline] { (Epilogue) 00:36:55.323 [Pipeline] sh 00:36:55.607 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:02.180 [Pipeline] catchError 00:37:02.183 [Pipeline] { 00:37:02.199 [Pipeline] sh 00:37:02.482 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:02.740 Artifacts sizes are good 00:37:02.749 [Pipeline] } 00:37:02.763 [Pipeline] // catchError 00:37:02.773 [Pipeline] archiveArtifacts 00:37:02.779 Archiving artifacts 00:37:02.895 [Pipeline] cleanWs 00:37:02.924 [WS-CLEANUP] Deleting project workspace... 00:37:02.925 [WS-CLEANUP] Deferred wipeout is used... 00:37:02.931 [WS-CLEANUP] done 00:37:02.933 [Pipeline] } 00:37:02.949 [Pipeline] // stage 00:37:02.956 [Pipeline] } 00:37:02.972 [Pipeline] // node 00:37:02.977 [Pipeline] End of Pipeline 00:37:03.019 Finished: SUCCESS